[Binary content not shown: tar archive of Zuul CI job output. Recoverable metadata: directory var/home/core/zuul-output/logs/ containing the gzip-compressed log file kubelet.log.gz (kubelet.log). The compressed payload is binary data and cannot be rendered as text.]
K}'c'$cim*Ztu֡ ܁Ԕݺ;pFЇ"ry2J7q%ML Vh|"A !B@:4.[}xȡႱFB& ڠ ҭ\ue~UE=e(uaB"z/ow#XVvA;@[rτbot}J)]t]ӌbUF23g_%t˲%pU6l\{=nqwY[*<%շHʓAN: e@>@Ó 3%Ђ%wI)NgEOo!F%3ٻ޶r$W|\@ȦӓLY̗ >ֲHbzؾeݎKE^TXeuV.C>L2'sNz6A膙|clyf-dRx{r6#n)S9jc@k2Bz]9;WJ ^k-n8f3"0HׁS,*[I+^~KEģEB)i-iA` x)RSl^x8.#H ŝ>4XcYw'/=.aoh֛hNN꘹UΞm)6FQy@ zE,& JІHc 4*&өQd2 dM, XRY]ɗLk9W3Tؘ8[i4͌{bᲆkp!Q0q)k8p}*{ORIHԌqYJ8/Ŭ)Fd@JR.XQ|f[Z?I%QoBR!hHFDM;ɬ9xئg3b -ڍ͎6oy-]7㝇rŬTGVL}s e:xrY7Yici 9%šK6j"Q)")h|әg3JF]A}Q4E:ɐ,g )px 1%2.`"eIEudKH"4rƴҍQ.ke2 rB57mI&)EMlL͈Uq5[\dlleø([\lqqm@lB$5PD<(!gㅌȵgV? $9sS͎!4H"x'"w8&DpԍP uW}&wUɕRFYF06:E`D" 6jRT=S<0s.n\]vbSRWd!,D yH· i9'#FLa7s[,д 4!\џѦsbH)m P(iF)-@ dTN!H&r۾rRF0FXa l)򑣧ZH?|o'z{ź(wk&f0R63 rIYZu2c;-)(/USwh\=?I_''{woz G0x,(ey.ҵƭf,Ϲ}\i1^tN} $uk9t~wrrz%TE4S@w-Psot  bѭ~=}x.iT̗ΕJwŋiY~>X?xw69}~"6,9{MUq]2tg:z{z-0%+[u{K're33-؄aO4e.>/=lwқ[ݪ`[wrUʕa3i\8_aE(N.0?oV՚J5͉VLwb>>|>?O4*{?H ;4jڛ7M4ـo.&ݛljvkemA7/'r$WWAWHlZ'\N9⬊ZKe\.xjK9/pd57^2T׿C9rw~tIIG/ ֑dT"Ik){D+ ᝖9cHJIy C#=ayUzEÎ_hGZ%)ł~ԤozK,Cp@Pz)mŧ9f3Ft89;=DMבӋTWiU[osy?zK }P;ʖ  m+tV툞m:ȋaN.zsav|~TmuwϞVG}39 Sݧ:?ҙ9?w2ٝj›סع ml!hx+vo!^khanfG3'ʪ@zra:և PyT!W B7v 6J>}sbt% |ERbh].G\s LZI笑9)I9}sƽlӫSޓ|q.ڕ~@m}+`yud rƞ-vV3}&͔dN[E.D)Bf L$x~̸v ۡK2l7f;F%>-G>D,ڄ\g%hf77{LڻG_xڻu2lPl޾._ V0c}SBW]j==p)tEմi8 .ll GJVTr0fj / o|fޢ楒h8\Sz"^Y7TI=8!qwagn~JiN %I ʢd"rFA^USk/쥁{i-AH1{e@~MCV'R>I(|=qyfWv,sKTD3>R~ZH"H2m+@/ 4goTjC|s'yO /1% 0(%!#jpJF\4j1%0YI0 I/1R(J  DMQtl'L w+Խ WX<Fy8_](|Ԧ8C=]guRhFeA$P&E!UE/!d,aExY3$vgm+҄-`n!uhs^tnᷭ~LqĵM$^R6]$]p҄X W[Rp*UР*4ViUp *&%7HD!r0.X%Sfrv85GJs,Gz43#]WB6'N5lNF8fY\ҷth1yͱeGĎ5Ύ`ܭ_H /WB}YtE*k VH; 粥@?ûmw]KǍ% >Dd|2J6>lٜJq!KS<& eWrΐ5(X;KEF&s +މJ੩uل]IzJ8ob>t`lORS:c?Ha}6#5pr POYo`{a\N~&=I0il6E=6Oj@TA& ,x4.ȼ9@^=*^zg/k(:>qQc z/VIf%L|{CBHج9YԤi#*fmHLI:s ЗW61u!3װИ8[ސE)0; cyN{%WDwn]m.;~Z$/[ppHr'JzV*^GKnkHki;*N%ue.Ϧ.oG^I=n̓B"O%Z%uCqKJͭ qmZ;W+=GۥlVC+A%6uX-Q6?ki _,(k م켗AɒNLdƽ^KFd <p49J-y^ ցc"ÄbfT9&`JƩ )BLE[iy ]_Kѭapt9^Y2|L皔~DfsϭVfr`%P$2^Cjn m^+ P&DzaPs _jY=?|0΅m- NFYR yR%\le1rKCd(Ѱa>$6dJ9Fg.D[)2kF/0Z"f$1+ѐVnO'ǷN'֜9Jxځ:fdx7*gq,+$\}F囘:~lFxUUobEҩE#}4=_:'iJ٣%o ?wɣI79: {qxr9mKxGum_9qi_}HF9."llenzu uR~:s[c ?{Ƒ\r~0wclnϻb?l -THʎS>%O Dg]SU]U]PL!hq/u/6GR<ׇ-2d)o09fEp1_tr@Zl&lu֌ uqxnm9b:_>@h3W2am΍uݸflv Ua#;Y\"*s wӔ t<0 ;7ra?n/-aLR>Y7 ̈́1a&j}rW!fC3s"hA 52oStVoeЗQT߀9|cٱJJXN)! h5}R R yT&p `Z׈KE> =_-Zxc.|6=-޷ NSݠ]xYpRIjԶ*P}zR}gsŜ?WgkqTtt \DgP yЄKG_i>mgY]@i}':J# cef@|kTY͘NZavޓ^Ax)U/궉Hq9#L1;ZM[Hq tpO(7& ƵԐ@r$E:`c@,Xlk5QBR)ʣ0u-Wt w:eUIHyOEsb6aKK긱ܹ)!8x- \I, ``1VYE@QAsglrD`p'%1rKR;qjqkgמ|uWu]Sv64Hnō{*u{%n3%S}n m+^)asͨ&4(bu!B#%;Oְ l"dwCPy%%,n6Q4zj4i${oΝQ5>sMG)G2̹h{:o%o]'r%zWNuH8D8r'ZCO#rL "e0DNr$xQ14`Mu(j~(rHRIpxaR[ERcib S"g%xKh*aoYi MVRGRvNĩ2֧F!n1`4 8kXE#[`ˑ~] 0 hc:Ns !) c@^)d"|Q1u-6E[ qQ\{c1 IA`m)QA$b4Tpa<~5J#Ap){ѲMgUNhE2VӋ':Zix8pR|\`n8FWepW9pEzM\#: N}fUkhup1} au{vg+>ޭ( eZL^`};qc{/T{@dB5py2%~b:H* y EGAO}Zږt 2ßn 30+`MM[X'Pm"/|\7*۝`9˱_l9\ `hn)1H-h9I)3~'xXY3 lf*ØH`A 1h4#킹NcB9#;5X"4ވ&yZT_4ёo #bȻ4ȬqjdE3pr!FT i{7#U%u)Σk*goyzoZ|ZbbK64AX2W{3&>NGq0 W4 2D=!.0Fh^H&V*` 6kxL)":fۡyvDLKVSGBQ.vHFRQ&JiiGkCe/Wl W[C SД"ÞSTX k\)*RVk,Pm @Z8ǹ& nȧ7%9FEm6ZNS {1T6\A%st.}zjUOy^ L*SڔZfp0vԩT[L=F/\eߪx]ʖ{pLh ^V\8. 
u.MϧBY Ri?+0:U:BSV쿿7_7$ûIj%v1 FbP_{L/㿻H0̾C8d=u)vm.fW7PK;$n& ktZsrE:$*jjcBp>ǟ5]ͯsF)S6b }\ L.6GR<ׇ-2d)o09fEp1_tr@Zl&lu֌ uqxnm9b:_>@h3W2amu5EZ rg > l9¶dcNdYONlh)x݅NX"S\ Q(ltbH&nz暱/KNC4wף,XaDͰ(_IK}\PL7~aX9L~[F]-ArBuxkM,(ASD{2`xlm-Hu,2!'ةҌqVJ7ɔt/ 8 .8+bNQ5*M'JSݞCw ;叜/Hll͹d6?e ;VSn3ٟNƦBPZ~Iw|,BO?L_l~yϘ){Kls<~v:ƫ~6[dnޭݾ,%9]۩ {/w&'QϚ=];q隅Cy` 㬏*8okp[of{ctTjuIrnn1RjUwgIK5=~3FLjXC 굹_18h[MX 54P &"a1]cPGm$c%#l( (A)i4QkV!WLGJ_^nhC  $y} fP¢O큄fZjrQ#"c>]?ou( CԜ-k}Pzs[D WѠ]xꈛBL^M -&abNx7]`ބ~/rn@6q9#L1;ZM[Կo%cv:V@h;7Xö[CΔf HA1׺r!ɑ-s!(\H6Ș uNp(%X+"CB(Qq, QT'T#r;0|YB7zf\5aa1r6( @+m#GJ_vxE^fv3؝ bl`(z%I_[%YjrA$M6Yd_;Y[YpCυozqS_ƶcƮ:V]g«BR[;ϓy^gj&4\a8 kJm/T?_IN;78M8=| (ۃfD?:>d?#'?x<.͇QY{(2L&k*RT&KyU 9 dp$|(13bD⽕[7qQFCu{Fd,(j=#n#M%Y""2');\(:57B#c#Y}vO|{P &5A:ԁZPھ Ğx? J\i0)Dr9I)U% ߩC.Ŝ&^w& kt;7k}.#K0\PXbPGomTDɨh:s_6ENZ@F %K3e8dEπX|A1z'Bc8wM6\  !2 l^vvǿ3-b}݁owk[k+3!-Q?("m(6ЖvH) `\ L #As:uںsߺ8nD @$ .᪎D)KE(1A06v w.t">\Jr-jG Nz烌Nh@y2>1\ʹ~ZE\J8.p $ϧM$B Sk$?L+Mxj;PzklGȡo2 =\.PGkP\ࣃ1GY`,c\6PVoR$39#sGA19G4;)uH0$0D1H)J*fT+f1v=7LܞmkϲҢ U/BJ,"%.`XaP'C!9܂l<|U (Z6LW,7d,JDl nP(FB S{0:Qg~QǂPdiQ|TF*mQ Y1q#=Oy ͰL#}9j8!& 9ЊséoOdקзlQBr%{Pꐷzr%3tAnyy^E0d)Us 3PUQ}ݽ3?;[Fn=*f*&"ǃ: ;m5,AOq%ca)x|| ПjuZV*_wUr]`p1Koz^]H1~N*̷ۇkAHSO4$a݅7lO8y)>o=^ٻM6 6^liTj\IK\>Bnz:b2^SZ k Q%74pMn7л'dE?O7wo~xN9o?{'Οp& t A$z@W]/ڶ57];Q&|~hs ~},[[ 8n 7co\^3W7eyb!_y\+H6n:po\YUObuXaxGAuǪ& C.BwhI}Gn\OXDF_D_AӘ/_~82KjK'SitBdVw~O?9)8[q: ?9'e8!]ɌC9 7'%I)NE$ILZ3 L kScp[*Pd)1Z(OJge`9;j7cJ-B .r<QoG=Xb2ZQ(E@x"5MJZ TeTS"\Nіe}dd8[M]i@p }”k8/\P6fdܐ'.dn7ߎ,ɯ+~Ɠoc XN=b*! lN/&"AB(edA%.憧=|8l+JDlĮ-'&&,R@6mFEv퇃xG<ؗkӖ6v`7xDUDf2Rmi4AtФZI*7 RRtࠑD (X3A!pBIFP\5rJΌX8bkc_Z戬Gh}4Y-RP>>d+#W'?xV=.Q)\ZEE")˽jUp`=T}'g}3#@|Pqd\lU]5VyKЬB`__꟧Y@f./Wʏͦ.3A=|Pr|qZp>@@ڗJ] (X/Kap0wv><4FRA+f蠌q>0TkS˕*Q0/ZkYӠBmnv*UYk8;f4=;wo=%ITMsWf|d2kFK]?.ڛ4%q_oͪAYb͹1nޯZ^VܼF楛wY;ylY[{wbQoGXϫ~}X\6tgʚ_;.@TD<$:DL.U|2x/0O\U&XQW\E]ej wuTὺzJ  =uɥPUu TWZh+-2(L.?u{2+v+]q5Bܰr>)^8ݢo`}')wd<_ 8g8В!O9f4 wwi=$5 6`4<5j:Sz5|4lpL(ß qFq:/^>*{:ًa7IC8 +}H5lɥYC78؍uc\H\~.ƣؖqۏK#k!* _Lqڳ*w+ /.&rIUõϔ@;ozx VjU'tɭҙ%XK;>;Cv-/0AV-8ɕIddilH%#<\YfUFZ|Gu8.zGj;R-l_M?Mjf'BmJvZ=yPe'OTٍ\ĉP;Q Rr71\q2t(+" ]B.4'tQ}F (8+Hw;yu#^!'|V1seS#>ǣ )\˨ 'AR-P^s i.޵c[7;2iS1oL*nulŵn3DΜff#(jAPpeg0"O*ڄ\d RZq_=P BQ't: Cr+u^+yБ۠ H p.8HeX@r-j*e+1$˻IēD3⩣FI91%ĺ.j9[+9`OyͿ 6G*K]\֛/M#[M71DUEiwM~oVfF  ǿVcϛd|_a6 ` (n ߟa47oGÏ('kK?Dm IS)[M 1%?$lUnn+^skx[֩-ҵPt=4ZW o݇IfG2<,%8& K^v's[E09j.?~6<ƃT% ZȉDXihF@EoTrLU衺Uiȕp :@c0v>')BtD͓u2ߟ^`Fn(kzDM0GTό(DPQ!';\Jh^;e`ajZGC6n#j'knn'p^hd{";PYn67\SƸԚGB2(5΂$R&L!A'!DERwWmp R.E&!}lh $Ip6|H!9EVs$OT ԥW1!!BWN+(T*dǟlw2qZ ǜ")\"ߌO6d]1.M&E΢ωMۊw/q|T4_/7dm&t8{|ɀk6m|pO V}Ot0t59WdlkZIN=5G՝8~Nxuk,8O?_BoхVob~y|6_÷Gٮ 3ݦ%Շآ >VEf;ҹm .*׭m㙶dfx~}㒐hȄmm3=Ɨ+Aۃ?G'L(j|)sjϒH~l1l^.:.ƣb` CLlVjJ@OPcn@R?OHH!Tǰq`uԵmp̎|C 5UArY,:^6Mw/^s<_~2b7ǟGK~X|u9l$`;8lۭ|hc!-0My&f& 2vǵxs ?5jT{(njBoQc|ަŬ54s~ݲyۀJH7E 5FKgYlOZcU,Fe\]i(Qk' YY ]*X{'G= QDI^/@<< Ww;-0ZJZ2R8t>MJۈJs"^'J/sOe,Xo\ ֒5W޸z~QEv Cpu>ʘͰ20s#y Gg%X|i cqc BWreF{3zohz|ý7 am\N{MPI-4XãыkG!?t|{c+H{ ]ūMm~ۢz&/` O ;}T/ާɄV*: dlGމv)׬=$pez')QEE>IҞ3`1H&%cAGk$[O hO(76H2%)Ĩ5t:,hac`ȹ۞5l~"3{J1S2Drҗrs۳povQ]|~hz+M1sm=mf P);@k&2G^|hm2"YF1b$Wj6#V;eL  ~޲9[F%ւ'bںCRI4Ri ,d33&Co_p[   B9V טi3 RQ^5S9~QV)W -7ŀ;#nrU1Fz'Sh$%5# ) XUH… ՂS9W=I9|ֲ3'5 fm`P|p dzeqp|<~WZp8eDJI)K_T N^e'EIVjU'tɭҙ%X]^Cyo.qss>W:Mћ5nLZG ҷdAmcnQZ8yscq$LUZj\Wv_4ΚgkTv2*1o59k98>ϐY^E]`aU: p?5kUӑUbu|\}&ÈmehYwS3pc)Uq_8;F{eCN߄&ik|iAM"}ǿۋ"Wu5J6Q撶aɡ5"#5>X@sc`OaF!Wsjwo646Oj )ꕓ lHU7xZ04n=Àq(E9-1uo|IJW?f%]+ 3AebU-K[#n~ǣ&Txpp{ ߌ|TO萃S}ƿ׼ӿMtxxjTIrx}YzԉY.+Z'Nhh4[i&lQ>I{ռa@m s?y?>p7:9R G_.S"h:v×e5Mxb$ΖU×Wcj[,vQF-Oz:gUskluF.uY[Av3 NGXEx5j2hr/TjNnW~B/~|wp{_}=8l\Be"o%-WO~]WUS}Ӫ 8<\5e{8~Sofq6H/WI̔b7ӳYQz\G 
W-g+QfRdz41a]j4~LFxU;Y/g+mK0lLlaEVƥo6Lo=-mVCJNH`LY,: ]VCW1"}cѥBIRMqt/Kvq4Xi"PtQ2:]sku+ u,# s-[̻v_TU<-{{2}J@K2jdrp+<MQs$f(S /K6gH?BTh`w:QQ2x eQe+wT!xP, %%0T [OyxEcœYJs}zW ]!6&G^^\7KݤZ^HeÄAZZd{g~4%.SFod'%YȊ\9dihdc\nTLSQ[S$UNc!3)//Ko0i(4@IIIu;gԏ*FƆpu tLҶ˫b/6?$8k|=Οn5"Ci.xQBe2U.sP:#ѕ8k09+Zѱֆ6ނ]5❗-ɬTGVL]9 e:ATx,}bu0+m| V$L!HBQ!]Dn& C*HyX6~RhǦw^#5xBN!2K:X&˨Sp8^Ip؋C{U LK9U>2ˣҙG5%1. MѼ/&4呗/~+Uӥ__\o'.#l@ N\'%NMl0E#f$+T#).T7/ ۽Oc8/t1YBY^#@+Iхto mm/v`O:y-tofWjgnuX:|ebV;5{E /jzl%vx|կ?)=mxЫh4Vףd~Ĭ_/oWoj#VZf/}m:R(ˋA`%jI μ?,(YGkWjM x͖/wr@n[ltАCD&C9d!Fr-f6eV'Y-rf0Yo V7+I,* V\1?OxzUuɍu,{wpquVd_Cᤩ+䘧f3l@y&/ꁄs-r)#˜C(Q^7fT`ȪDN7EsN^dJlnðGg3p4ӑxJ.KU0.L9nGmX8w'`f4:ۛ7 +p}>C$OX=aYS[9G$Sʘj!B9$ yZlK23f@#V9h4ޠ!&1I0KN{*2#:* Y 5XHJL8hϝ>@ApZ" PwG",XԆVj[TwwTFYq222WUx0w;#pgz&=QqPir>p!bLQZkF%UKLao93&Nڑl2&ȘD*!.h< ӆ12MNh:#3/9>cbw)d+swO{ )lEEwwFȕG6ChuѥS;?b-uNOtt=yzuԝΞY :սHOŮ{%Wc憹R^?\|#tuL[B^Ap_k`oT#m/⩘S3='czR*TcaP9 %jwJD |rH.yo:`d:$ ęsrGJp<n@(x|3q/Fvi[T4]ښ4a齇Ϟnһd́J"FGs6*+M<9B֑dP,r*6% ON&ok} sQۚH-f<Pk {+t}7_ ({|s5>tx:<`-䵋klԚ%r<~:.wO|s;s}g|1n͘63LF,Dr@ˇ,Ls9)P 7V9N`! en2s4qR zPr pqc}pMwrs+o.Hҏ5-)#K6ŧs/,bV~^>rbk hљ w<ɜU NixCYiYfTN!mphw9ZN8z;w2Mn4rcQˍ& lTY`hFZ2Kc3h'Gΰ'd]oGA;YX*rJΤrIYXu"<\2øO42gsIr"9`"8/89ƔQNg29 '&bs *s.mD_/`4&`R fBf=5 FvIRc8?Rї:>;~ߓ^K9|1(C-8Wpf?Szg;9qzOaZaI?rP9޽y[ayUSlCze V7AUdD0sKVK`l0H;gc{~<Ƴ0/I p #c<6XebULtM֏GͶ"60Y߿CS}H'rW[^./?{ƍ}ٶy)=. G4i|[~HJ3nYd_]X$M|x[ٰ?ITB,u}ƹz}fK̟/5okuvU{U`v10$l^`4t=mǼ4؍=M݈XՅ1lLq+h,]|<:MËt^bz4V YIbN (+Z>_aiz<("GQ4iMkp4=-ǿ(z_P>k.Wюirl ݛ_E ೮O_е5]YPa ߣ_5lj~[eve )tQ~8<(zDbA[.;w>lav+jKXl/5i9 s8 ;(t# xhXϠpaFЧ:O}A0',R8+ f*[( dg /s!KUl|,nB۹ \gd {twS]^l-{A6GΖfIWktLzu&QRsٺCٺI$[XĮ֓e d BFɐ,8I(sBOT?!Vy;SPѱAk BbKtx*TWe8Q"{8ä]-PWD1 ѧL1I{3j5rv3jzsբxoeɻ:9佚:ɻ;%2pJŭ*<78@X`+B1(KV 6LpEJ`Ө NǪQ[cDPYrA'EL. e&EˉgR%c5rvKzX,2Z Y8oǒk5oH0gJ N''@χN@E&W)2)fMqB JcEJlie. B"66!hFH];ɬ\s)F++y#j[b(]VǡRWڼ=ݶ❇L3+%z꜉tn Gs*KϺ?ShVjIj)0, tPo^;#J.j7%J5 6lK^jhl}(6a[ |0unf[ pz6h nVjy  >d!x{G rXz SdET%!&e4NЙ |Tl#Ȭ/;7X t-{ӫ|I7O+g6Զ/6Ů)NVb)jl&'2f'PsR0>H186XwKG;6aecxh{X\ǫ͈^RFSJfzuy+(6.H0&rT8AkW Wl|K{b{+!H^~$^a r'r BĸYd@7 Elxb::I`CM5g.wR6y70c+CY \ߓt= M9mʦW?{7ߏ6 wܝs @OMD|^k-Ԋlv$XFא֭T9ve$:&S'܇X2r͛VTNP5Cnk *Q"ڸ/:ǀ|$|M(F%\FxEx:g h1tc6'u`oc-a87Y7kӶ{@PO[ 9N65yVTpNi )>.\ dQ}iu.-PZ6֍lNFTh$X]*dP[Ag&s#sOV?-[Rv&%җ9m}fηMwwȭM6K_Ko9A192=t&;]>0lͶAz'ov~ûG+ʓx ͘Y{%Ycy;^xp+RQk4_Vڐ2uV/ NUj Ů݁o{-"/<1爛Z-8䷗5b=Wy4 M`T΢ !8k8oMv7)Ry5("`Y6׌BJk$U>095?:$Hڹ_\^ ͉|`y6fhMӳ3+N'̿ &bTT>(Ĝ.ɼCvsW[6f@ZAuͲ$n 'ӫ/w9H;ڶ]j8\$NW!BWS!ETH  :ϼOJD#H4@V$<+伎چRl7Er*McpRP'tB )kAH <HUvJ̵X"gL>3ܥJ^\.xFWooWc~Vݩa 0v6u3󋬂>^8FVG5{WpzB}0k480VWF+a2W܏RcE56L|s"Fv ҴIy9}<л>4wH\9L #*R䬔>5%<*lkRC-F﹍28铌Ini#u߮&~fwk7%W4tzq:mXLNOsZ.e%OڛD XKcL.Q}3GuOWfgp}&jvubm,Em?7=Mq?o כ=澚+|S71$5#KmItLP 5*S^lM UG 0! L⠊g%E΄L(HdyiW!%,H!3҈HI8S)4Y"Z!++fBɐTq0K)q~{'ww%Hlzgy.YHW(FH@EHX+gyLʒdOcQ2X@dxTI$3C4X]ջ9{e> =tSCԞ)i}砍&MVKNzm6*RlR++.,MuXV&敱 ȹIįVb.<ʬ&{m)F">8E|.fq=u !.H(C Ǽp PJTQIE_ FZ=uǮZW9P u绫J{͇D K ˬR )߹)" -ǰ<6 iG_}} lXv}}۾?ϯY}ͶGZVin_h &Ro5 4KUȄmjB!d$=>l:[¨FkmnVٙmdΞ4sζ٦d2jd%;N)%[Xً xWa)XHmaNdK͐#ɜGqBJ\cui@`vsuMtn^;=.^@S-*s##A WG(+7%ax8y Y.H y$RT<0I'Ru.ߥSE [4+ f8I'Skt\y)PMF+a\xʰZZ )$[ :]be(2 fg;qyeP0{eIGJ1N:*duS{ˤL.e9Va8V9#|b^$ct>IĵNCr!g(q4NH3-sMؚeH)$9H btLn} %d]9{]+{c3I)hFhƩ\x#2a$a2P"욕VD #<#[+C!S d a_vj1N%0g*˵C K.T ~O 8F177Ç**Agw7,D>hj?.۴Nm^[0p%9 VFj9eL!5jf¡aEp2 e%xFɹg#t1gR9[)  sĬ<CH! ϐPV6w#7Vy-"`~5\H8?Vds )õ<5}4>6ein?! N#  CxhuQ LrDŽ0ʺI`h!pRA THRQG(h|wtd*{¹F}Ahss ð8,O1#˱lBם ]O緬ǫJMJdZ2+3oF*ɡ}v%/K KGr'J!JZ1|(FJ1~./æ˳p$ߠ$.|jO{+mFnwᬗ]+({˞Xm,^^nO!fhc=x"? :lZ.1EsFwH$IJbveK." 
ozyiUۤ!^i-PtFX s۹כ"bƶ x\wl*[vvͯ\ [pS\Λּ [37_)7T<\=\̽ɹ;\{7x[syNBrsPRk"g  $]hudSɨHq5Ӧҟ;pyFT^2@,z< "9B,fyXqT+ǽ֡H GY34`DػE>O]\^Nƀ]_Sh8Yàp_T,YXEރHx ;Fd"s lpmʯAmpef-ҋYunrn갋EaXu4ܛK-<.՛^#c ~Z?R^} RnHMw?\ %n&I|H~yukn¯̌b]iƳqp&?X܃Wu/\2?ֽm[S\t⦡n@)QMlޫ!fO*??7\\I`R>0ٽ C'c[Փ1c`EyGa.3j~fbA1󒹡Pw*;"0n[D10:4nIh6i['v;ޘ 1w ]ڍ6K⽱0%#l=Q ƽRܨZ e |҂nK _m8&'@<rbPrXbrF匹uFQKe<@#^,?KĞwڃd-zQyIԋ8÷˔>6   O1sPaR.9gq.Śj>8?TW{ OMfCQ ~5~p?~xV7cz*ׯ-v`?|a?CDX@$wEv%8sw!Ngx% ~֛Ȅn] .l'ՠ9 (2BY31ZiJ  )(<^Zτ3E1)cnDzdvh/o5W8Ҽ־$g~R[ ֟T/Np6wzW4g6>) eǟgUQJ6F~?Qhぇ$}acP4+K@jB!'1z8e+o\Myȶ6eEfvO~ mmC5flF~I\>̈́-RjX ,\ڢZ] )ԔVX&&VU+[3hmLWD]=A #k MqLtP*+mTeHZJ&:zt%D`ўWf0WWRN]=EX)ɺÃuxmq(ԄNY⍑;<(b A^9=O(Ӄ;=xx\>¨h2n&.hT h(i'՞TS$h]`A۳+孙hOJPGWO4EU{NrPtivEZ"^]JP]=Eb3N03g3*5oK_oF'"P~|2Ke>!gg"_@7@P_'z"y:Z߲J ݦ 1M/) pvaλ;ͣYX W/ΥT>omZmʖ|.GbR']urkJlӭZ-U~Qy#M\4s| gm~ )}Uo]WPp-b)Yz Ω[fv)G]Ʋ3-c'yE/$xe)[<` bPB$>.7NeO\'Q8s_$NH]qAw1Pȑ_E՝U_oŇbt]4?a+vL2~j /+Lh%p\f0R=ؔb\.nw6y$+3הU6AMrp?Cf8 C΢r9.&N簠lC܋>:2drY{=3"d= VFsh!ɰlvǃSe?ȑ7%ЦQJrV'ØSzƑѺqVP>ՁJR~Ų/4qВ>7:T_hXUWRtG+.PHbhaLЦsSFXp4wSgm@qnx'-}z Zv, 0;fx.?W[SmwR^|?R^} p6m*) >ix(rwn{ d޴wQ/mw_ r?XY M]3I H&L(elF2 !m0XXsZkeQ”ωB*!0.3j̘$GicP2`4 *n^#aEˡZnvEʍz9ok; :8^+e ǟ@\?{W8O`i{W$X \>2XT˒Ggw6ODʒ-JEٲٝLMꪧl`W8lK԰fwt]w=+`#=ȭCMUwXK70=aߐ }CE+j=ҺZ==tG6]gcx?yԲU-w>=z^i]&{GC-kYwDl? n]/.7Mw谹[jS.=.ho-ϩs?܉3n sRVk-i\zNXG{˅W\;aȭ.|mJc2…g 4іք|oEZ}@KUIK)9!N =ByYnNZ),Q|*ib\L6F%LNˠ8gGmn9*KbT6ˣ`,pEkж3zNCAc g[N6q2I|rR{ }=N$ҥq~'o_N+NFpdP,lР4DF5 B 蒖 +o'[֏w3w 7j.fo :r^w.صD ȯ3V}?ZeKQ"VuEeJE(3vi) K٪!Re5c \ߡL69{l%qWB@cFՇdDB$O9O+O(I$/K* LIθ cZ p !"r;IN^d$%ȝ*e I%LJ$h(H>l8{&L<&~Krz{ws}Njw$ܽlvzQLNsTFgфIҠM$ؤFZX?ojeFu@WE @}=HHzH։'NX0ۺ27 Yr@k=Vb\ǽv2ͪߢ<1m"}azd:T(6#fGS Mީ@5\0 :vJ:f d w^(#e1(UbY Zj[ J $vO}h]EBl_3lC[epK0_1r4gl'?8OĿOsP|내ֻ'z}o[Įpqٷp6Xl{ i! i5NօJY;6I:) cTJ1]X-d%=y^ޖE:CRV:4>p<C$T•e:LNh*h0獇; }Έ],sr†Jcj&-$c䕵B+]<*^tY['7w3w@#>4UbG@(DH4dzrZFb)8ÒI|fJX;|R:p@mhcxs2MIfMIcnOm u-j B e2h)& @+ S7(;GEk1>%{Y CN%H< Yʀ{g}CʆGdW8!JRxr438jScUpdղ7<dv{jzz,ORc '1sov2@)Ly+uQ1-,Ԇ!3銼`XQMPyb8r{YaLxrCNpN&maDhAY ' -v'Ip2J?WuhѢ.CC,PԵgwwAv0EFOtn샛q.2*^XX~pRG0Ii@YҒ' 2J,˽o0^ƾ}#Wt|?vʿqƥwbZO.oG9~|F_lу<,pd!ۦO;䆮&"ˁȄ:RJe\21OHL+zA[z)uɻ'O'x-3BYtXlҏ K(/U2>9~uzO/FP9O U<&gYQYX|鿈Ƈ(y@PL'^'FH`ȗZstq(P'6krye$ס)eM$mלnw/_<nuZz{t]U(kZDT96HlS!)R"eJ[Ch*:~JP,=>8V IZ-sPWvNR"? ,4"!)B2R N;=WUnIʺQ EPF"ok")h@ L& 6f d dsy/o"zך)99 \;Igw(9ƴx3I8ȼ"uH/If񜃗l+A>VU:Z}Uђ~$?G>0\^Ek(h<%`R8{h|)*$y1dצ7Qtۋ_YW9؄%7S=P)4"'d^Qe %Lat'`a}v>Yy{ZZ?%+ /e0B"9sepY+Ջv]ZD z ЪZYtEW,(oCe14,MY|8z(w՛߾|} Lo_hIp0) ~ {Wآh]WPެh0ZW AY>,-ٞO}ˁ7v4T6Z3fǙ=S2^owT$gA|A!OEAA /bV j _.ۚV6~ZDr6u!BG#sKC&hA'9OB6̺9fv6^lQ^^_cq`U!V,P@ 7ra0( JxqݞN-{::n]Lxܚz[#g쳳xBعSZչm2in;N\]O87}hsw|\ȉR4gKr ,ӝI.庳w͞:C鈝#{`h6Jv(jҞv'QVeL@0j<`udXD% -J)\"IS]jґ>jJV {jܯ橽R;jÓ`HTUۡā苙`' dunb 9d̕LMt9"봟 y㜳EE#νr 1ȭR6HYZ!qSj QĠI"U滋 x$ )[X1cxe4zl5ͭl(e7(}^eꃐOU+o ½F3!P`)3!:UaR0#ɧJb*$ e]QQO<$4ƔJy5Y?PÇQŧ\K ߌRh뛦,y&'m0:n{e*{YrmrMELWIJ-&nCւqԌŐ2G$LJE0 GʌB}@ұ}飶ě4"32xӉ̙@Ru3cmp3c}Jm\XdȅZ\kZڎ Ȃ4_3&xEbjf쪸7~yCn~뇏ʌNSlAGA y"m8R2mBdJioF2Tј&cϤJmR!`V0/PHHLRTvލ zƶÁ_21ڵIǮkfmܱv>&ڰ-+y8Ʒk:F, 1KV>\H☁ !f`E;$p @8t #ȁQ;jmpaa2$bl #&2"ILj#>BWh1Pؚeb(Vۜhp( w(g}s&2J8&: 8P2GAsͩ` )(WS=ЕsMxۈ;sorMS N ' 'sYڲ.6:^y>jgӛ\ՕpavN'*YZہ"b(Rk(!^+k)L@yM>jb w MX5ƥjgCfs%}Z2gK13aeh&Lz&1/9<10c~ n8§R7nO/Nʌ'ofv)z磛! 
?&G(ؾ?~~h%zZ3UqOKiREL R) gs ܶ{[ɳJq/ Hs+YNY޻!+O3f8͡bbV3sדM 05tJbqC*$#+*n]%LmUBXGWGHWLj"d `JCW $mVdttutœs$i]qEi1 M󎮎o1hR;D:DNWB }U#P)pP _} n:N)gP a-޳~ 'ٟ~~7 GgYmўߟoⰘyǷKn /esxu'3(s@}?f}Zv?m@mg_p^ENcS@1|NƄu\G'08c`bҹVCݗXSr:E){ɷ|4UZ&`;T6ːS[r3vzLԼEtSSmCOWR#c"Jj ]\qh ۡtP s"UKd[*tPJ1ҕˆ6ѕ%5thëW!/q*kAJ=Yyϳs/T[,f.w'{Y̽+jU$Yşo~veYuu]\|,0ЙU?\_BF'F)qw~UQ-0ĸ@̵q͊%+J4UJ}`%8SCsoH_`wxo\57WR]Hω;+kO */G0UO,k(l~q3lPy@$ 7{fY1)8@I OgC1кF11ʌ ; qa'0hkB`S2j0-9  hJ,%&S&@Vi+mȮЗ1Cu-6Fx2 |z`lj 9tPnv?K,󺽉"ϫuTZ}ƥ/-@񤽶jkT 96$& Ozʓ -U0rs`J"s VAfmYdB BQOudV􄼝0Q9 _&THdwr)K (<ԡti~K}PR6J2Tb7%,dp/M^A8ؚM]`1גuV6`Ņ%4yu d,Hw0`TyN$yc#6xE0P6Pc. ٻޠ(x lj);[]\vAjc4)@F&+ʄv$ ŨQAQi+d UlAM5cd$`ΫmO J+Y1v|э!ՠPw^KCp2P(SPEP C='X !a@YPѮiH&OQUͫd RLAtj ̈́b :TjMPI!0x7<|Ȩ nPf ɦI{X +T4gqJAQ"Y^3vEUP4k˒ (Ez@?!}TPcݺ8wRQUuAT"z_|fT,F^$vDRٙN8(l,@'@Hu{PH&C@U+c{M-dPgF\0W<Fw R5xldb̠JUYa8iB0`ǀ9;1²^9K\7lŏ*o#?8;u&B6SL`-$t>gAuP<>o:@C6&*fߦ-6*d`fPDdn.i֡ Ao%CJ$JET2R,zGpKR@W |oEUv!+b FOBfȈh2 A]KrAy1k"! 5y]%@_!8(#T&5|3 E#`Y+5Qv5+ڱ!," !PVvw%m1Y5H=V^:Lj 3`IVJ'ۇH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 ttN 8҉GDq@k9Ϛ ({N G['8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qf$'j*q9.Æqsʐ tN|3 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@GBIHNhPi\gqmd tN M8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zoiFեmq򒗚]w ޮ7uu{y&z kC.QKӡֱ[b\zKO@tŀ8tp6*st(:B *%3-V 8ϼϟjU1,UqqIeI.mސvgL(̳놽ܳ2 yu^hzg^4tCC7$ @ <󞋮7MOWJҕ1hp ZoG+FҡPҡ=z&8]q>z$b:][BWCWdM# 8aa殈Qc+ǻqF+F2M+o;2ORd*dpG]1D ~Q tuty+6 CW CW:]1$tutxCxҸVgʟxYVs>iʎ8,pYnh>efGҁcwɽko.W7~wk_Kǧ39zs1|}%po~l=nofk1|cVZL7K-*edISE,zu[1rrU Tɥ7;ͧD/o.F/x iRKkmYRN~r|mZ5DzHxin-iIUeq%x3Y2%w~ȍDLZ*gz%{y+N{5ToOd+{Ι矀åaѺpt(}:B#e)DWKMxd(tute=e?ɺaRmNW;u`KZ =S --`%x]8uJ  ן.08;~Q[c흡6Hh'fv>g w/F2vE\8R pC4H5M'bt*HeyR-0NF*St[.T}G[ir6:wv MO@ƨH3luF7Sl7nZuPFE7ntק*`y~s/OnKW{j?t`t+-t]n$`g0t(tW:]:J2)+vup]mNW@:B"e]@)Ͻr/|16#+w4]pP+)ՇNWҕGOn `0tpm:xuFk#+~G`Ovb|hOe84'ՓU(@tOq\tpmwNW2ʓcdTH#G+K<deuutEx{Gh:ө7F9ﴗq߁BN}ŭnn,m ?q sP~ѻGWR>jϞ&Jt8d7:v[ٕsټ޻qe~s©_ˍ&o/kmFp]|i+5ƂevCoz9zY^rQ]3 _$+[s~j_򇱒2\B0 MoR$I.7yQS=PCFp*Jr>w&W0Ʊbw㷙^$an8 }=jt=USoqWf~M۪"X7ͫ A1zD:p$DTQIde%5,{S,:'rpe^oW6[[ 1A˕PDeq-@Z9&U(4LjME)rqqs.AE`jʔ-b>39)Z{oj'3O!GeKpސjlhhAWӝz:}Og.l^E7S2.)85X/\kN%WuWvpaZVjtW̫|TMuE Je b87 H B܂9\%^E^CwVk5X&ejLkhô:iCX!5RYPlqNJq?¨t2Oº"R BqG#Hc#*5:&n3P? gO4#HNsk9!T:yh:(uK*x[.5!/I 4#7[RFebL$ R">ĥON[(y&IY=LѠ<.F.2/p@xϳEt_?:N/Kpl\NJ)][eo;e\S5^ny378ɞU߯#Dj͙J06BD~8F,HtAc Fs{Iue ~x-/0EnQ@Jz!ACJfx6`TX|1vb7:wKW)D5m~k4Qߛ^\3jћnأ 67n|F[ ׉3\>3U}g?MN]sk~Cyz!Fi\OQ>b\#xUV⣏;ѷM-^- MͰeFya4`żG@6ͭ2Y'7T6VKWGnX"m<W%tƩUFvi'_VZmT!Ol0_w Ϩ>|ϟo_}w?|x?8S(t7IAXhGӶYh8n,z ֧vV ̗J8ӫQnP1>}=bŊjg_AQ/owMp,}ހ(zd36Ԫe XZ/}>ZEHm$ MNᑛhsVY !2@IhΆ+*|f%:o~u2]T.Tuey\nNS7n{ġUdyCE +7 ZP4(;ME =VtJ^]>}u{_?Cka;lϰ%壂-&eSPns.#yBN҉`UpyV^?.?'&<#ekʞ!{hO7^D_gtHC yp֍^tb DC`./(H6i #:H:2J8m!^ :$m$J") @=͙Bbk\X5rvZ`A]Dμn (qVF🵷seqe+ݠ!18u\e%]Al6y,&sPl#蹻Vg"x>۠{{z?r3ݏJsdy=ΞӃS$NmYjkQ ĥakΔŧOA-kK< 0:;+ KX{$Y),:$$c1z %3.ߛ&lt";t'85(Ĵ@Ʉ8xC-@rR!JDy%,wVz=S-۱V}D;۵>*=n;ݸh+SPLFK Of`IR|\+b)jJkUPAmYDI6Lϒb̧C!A$7FPeZ5,[Pr:c١kOܡ*fC;ӷq<9*w;jǸּ)V JAb Ih\έ' Z&O*)Ƅʘ8ez `z]$h>Dp Tۚ5rvkJk58P.[hm¦.|,]8(mzqo=I|~3Q,o0.Pp|3ӯ\c XN=b*! 
,adNIlJ%rI)ƞ %3βJ%Q3iˉ ªMۛm[c(.Ǣ[C6mYkӓ>!moQ$(f).0*%o8 LRiTȐ &A #Ssd$jε[#g>|d2hPZֈOqkxfMCNx|jJ"PϘ@Ce"2߮F| mcS`(PTs0!%$w[#gF9jz,:&_gk\r^l4'xҋ8^u$,@^214ixF#D(B@<ЋǢ[ChYT<*l}]m9j++oH:#LBaZ~L'PEN(i!N2 ^B8 oq4ѕV\Sc=>0g{J=Rt77Os5@L#֋R%ĻP\NƟno^gςG x6?&LGp A傩DU .g*6@s؍A^!0 +15<(WNKQߌVڼCs,6'[1h\(GuhR<&<2bV:.yh\ 6x&y|;-% ОLg2aDK,h`SFEL1Vz#0獿7T^S|wKw?Bi:S,T*qY6rw=G_t}]94śan蹣yPKnzv$XZFW֥ Riy4HumT\xT_X/]ڥ(T>@2D'R9G!"tJTģP%FO88k#}+/5i;sMǂa[Fy{'LP*#?sԾPB@zY 1 {ty9"Ǒd\o Z1@e󁁧:XC>XUZyZNystLz&6v3{ evG*.JZ9PP\ Wk=7e!e92.t\KosTt^f}4 d%g|v~Qʛ\{ԼTr;?L-n7}އ "M;*WL,\97we2fqUW6ۜRe 2)O|/ Q-w ϙ+%LO`e\2Ie /0S y GLcNҾ1*H\.g2Z3854J!!2&D稉[2<39m?xFo9Fb;/%CByb"H8XSe:dԩMSڤWN26ʱI~Y{%>ᇱ눹b=OnIb*I,EJ,"_H|RZ^ėZz]ÇiR^b&$c"4. .Xi1K_.nSmgJ1q>u<shʐ ^cLˁʭ5:1 ow%K7Mߘ8[e^ JiJ19aXŀhp, eQSan-Rza W4^Xu*<=1dr pK28eL"ED &P{]O\ڴ$x&Q,e e$'pU|9VRŘtkkmث^k}r :o+e$tҁB#\346^&2 &uD9hCQd"x`WET<[it{zmjԮg}p8n֙<-;RseQjs~;kZ* EC%~w?Z j=w8_:y&(]Id$ ,ugHFJwdn;HОdIF8d$#I)oCˤ̙>gQsp"um_fl|=xxvux:cx! WƐ('@#Ӝz!IpZFM$sB^!Y =1T Ԣt@ ];c8 s님we){X~|Zh)#qjS`F22P"K+xQֆIK5n0.b0h@5c@:}Îq*,Ay5V.jX)5\ goJSq {_u5W''6ߡV/Qlk$S'$\hT6jc1p`̤3aEcNFYi<x2{{詅y^&׈`aappvԩt :"f%y83!葧)fJPagg >3rt~ջHQ0h;Dkq,~'%.DŽV}+IZԵYM8$NNQӑnpqB6/5i xk8& XP'^r!r&kJQ**TB$4H{Ndc_u}嘎f+Y~7NU \PE@S t eZ5ʅ .Sف,ъ:ᬮl<*xu\*o}a8n2yFTM:0hmx}2NP43iY02^JZRxk&1 .aN:? ٟ ݙ_ Hn׿AMƓ_~yG~Q}; d6Qo$>Lo:Y RJ^'El<5A`<kS.pl\Mof&Ϗc=&[ִ&쏾Oe4EPz)LJ=3Ż7/ﲤq⪵gJهQEv`+ 'Iew7}(7/3 0G: 汆!dax]Qa s.~2Xp|" т fʧ6|×3~[~cӹi)/kZDeW;fa*y$D>ob.:qhC$! =aif#DY! JYNQZ 9 LOgv 9h0~k %,t0j FX)R % t6/<_mk&1x G[fŸzq|Hn@9EAXf-L\"M?H}g{@2Щd⟾M}GÀ"ι@6g"sǵrZV_(iZO;F1x7rc`\{K *`E(Y@X ,O\?fxk _ X:6Y>?Euyj:x6I}Q5NSE0–8Vrҥi+)E@#LxWj̣Uh4BkM̨3hQ9b0XJBmJ+x d<ź{Ζuv}]6k8&(5qEY)D(v0M35@l3%l„EP!BHrv.ܺ($!s cT9I 6 6 e{5rFs$`禋M\qݤ=NRJ2v;\zCp+L:WI\&WIZ!WIʶpu`N[PxWI\mE݋-~&(%_-kl=$-PC琇d.tо P:IbHK$3Xl *LsL(@WHWrr`!WeCW"wJ(9J``ɲ^#,J*:AX ]`e6tJ ]ZEi*l+E($#l*XW uPꁮN4X 3 ]%J%E3xt4ڛjن?R`m:"2f|LF )E~NSJ=E<2;S`,-RIփK #ِRw!|0A>H/yMWc&eڡ]t%zjcXdCW PUBi P2:A"ܣ*,JpU6thuPR= Ck]`!I6t~pgJ:A!L2+T>sWW ]qı}+@@WHW)AsjH_' .·4|>]:JܕČ|+t6sWV;RaJa"F?'of/P }5~N U5~ts',Y"4,Λ^IQiNN-gC .& #ﴛP*2).QT >~7@!_IE럟HIP~gmՕOuM(a~=+zP'!/fׁqYMo_[L<&#VˤLQ˾`Q n\hS?J?w9(p |uuTIJFd/R~dh߃oJRerW½׏kqyW` IC}l5~yu%|lO[6i wwWG!Zs6HbUֆ6M̆RBħvmpHU,쪶+cHCE#4Ls$i6猑*9o[wiǾ vL fgKV^E,)Uxd!R&R/5eDDL ` Xy$*Do٫$yLR5̞XVR0vS JTx <~TtFNm$) Ӗf3Aw_z uY#OW_}1Lޘ6;mmn{yZnҭ5@Ɋ09_N&1zx?45靖uE9~{c@xmep`U{\0mzA* RQ*e7߭qDNOS\#0g e ZZ u^*FxFH>T!)YC(2X=Z;Ʃ X=RT`Xr:%,YydcbhpPNJbV$;Laf0woԥ%XaUߧVq?͈m.Jͮ3obCW`|4*j1rʘB8h ʈ*QVr:O73_,/@=30ϗT`X߼~=YD!:n<4F(]8҂`8N7Jaw$WQѷU߰]}?_%=mȸ):u?~;R?"K;(KL{1UU:\T#l9Yޜm>Lyf'S.,ƣ7ryUV M <֒sj2ep87WuG:4Ϋ/A;݄L'Pêڎ~hs{>-/* 1$`?/ԗ]ח;]:Z ( d F(,bp\+ǭakinI-B[u(7& ƵԐ@r4Eai_Ɔݻl2#|ʡǯ<#y ,+*}?)o)..N"0lP0%l„EP . s޹G.$ )I 4yEĠh^(1*0#vjtJ9`y̝vL>p,m=3{ 0 ^رx {<~ѧm50.^؏_`v`nmT-#fWdj\GʮE2B]Ul1ZdA`!K>F/ڐcDlOl1dB,06E#QR3e(VsƢB,SVSB赐';`-o9G$炂t&x(AJ=^ z, \ځ2փNU]ԭw=缾Tt~x{E󢾕蹞+C6xY"2FPaѡ5n\_r~/PһCh(XW~yNiNq&H2 `"R9 PTǨ U6Ρ=Xx!I%i4Tj4vCP q9-R`P ஃv󰕺NS*yOw:A sP_(ճ~Xp\"Ֆ٧W۬udX$2"LT$RIYxdLH&VD}u+4!@yS&J;0 | \K0Gѧc+y3M鉦0SeOkB 0b0hp, eQSﰊFyG~)C+`lB{ ~ H⠮!]SK2*"d(٣H%c@=?@BdH'q :'`D֖4cH=DL#CwemHLKq3p26з: tp)چeɱl)-md)9FE!YAJ)lTH `bƎhc,kNJ^k?ϊr20=|@A)''ĩ'ic?_?'|ʙkbk٣:&UxS8$'b#5YUX*NWΧ\{mSIlZW3/[[{ 'Sۤ^{S:~;uyȃZ#c<ˀsI3dɩMB KM&AO "^5}{T3d;~ {!+3)4b?t?Mւ٧ʲͻMȾ,{{q܁8{Y. 
M-=|v&ڟ5cmmu`5,l[ CpF=.3B­:$ U=Y{睋}aI{Aw:W3{{}8*hZgxU-9g4(P.;=~z~++œ[=}R6@4ٸX)u9MLd–VLU|* E?ak!-0"WTL(2M5p\8Zn-*|!The;JuUvBt#5ʑCl'?JS]Ov 8/ޙեR4DwݼSf]/.NTK`*\un.B7wz9PF٠ϤSh{V5dm$|G- Yfm2{,Zj%d5;n< >Dd|2J6d"[Dl6Jq!Q:eBD.2* yΒ"CцtPfc⬷MHCWs- dP C1^6x JW#eܴgqͳӹO?91ZG0s'Q2XPUp(+iA6Z*۞d#nG_A;I֎mKkn ;r{kan;6gsi|h֩}Kckb'gC=7=2di-Sv"9[J|0 u .zNj5ɉO#A?ۄNLIR9E=kt& <}Y#ΰ QY87_ee$#]W`,E% QS]T#t-UnR3DJV1[+!92XW0r4"E5G4զ-KaX"V%2WMY$/b; rV18-ҴAAoEe S%OҡMLm?eMqY(Q2dFU:PYSTZ h{z%%pt@R,^L QabcH*mg:ʵ&eiAsd '"g'39YN\929ΌE`cLt <#mL,@jR iYJ9ɓ* 6'4dkLpRH{jpz"ͅR"͟M/3xoE_GpkzRһ\\W4>WO ^-=N1Xp0i\2*͜wM vg9T Υ:+I'F,D;ẛ wFKwkRgPTP ^gwh)g~0z6=îM~Zͬ]>2#5L,sJX>7}v?>YQB>}4(: ~Z%aelAjnRw~FqSw?>>qExQ_\ƥ;h5zë l003J\gG:~a}|ASo`[إ-Zff\DyC 6a_'z)]~vᅰo^oo>pa?\Ň7 /2Y$;]M&M᰾i; o.&؁n6ft;<|yK^Fۻ)Sxsǝ4MvJ!URTF㢎V!V/Y ꋗ:0gC횷fUv>Gϲ e$OZuN+$Pi1&d41)26ҡ&*WϷc8}z<91w6>e3>9Aze]nv~ NOپaU붳mm/N,/L$-OIkɘ *c37%5'XJ9\|56O ֔ڑf '÷99ESKrL*&nZyC {f"rh`hWt nTLSQ[S$P!֦, ՃFL. &KW3)5cc׌atac3cK]8k5 p_p*Q WEY7zAguYG*?w\c[1:a10\Bd̄_r1\ ʱbh jlie.IB66QCАpvY8s$ gOCM^c~/͘ǢjmްnZ:gZX#,ӉG3KϺQ}6>8LȐYQ!]aFMK!eX$ YkD_',X4bcc[(ֈՈF\"C -)dA1@ȸVЊRc# -\@Y,g% lcT p \LH!Z@6FɒpKaؘ85+xqYgcd[p4^ldzHfyPC1ik- kς5"~H@s,V/C/>llvla}%5T!~q]!{FG ~̙Ƽ8ޏ/h2ro[drtW3. |_q W6P#;$zi#y^ Y9d,Bbpd42dɩMB ΄ A+."p5u(}:op&!nzṆ:M{!+gR kurOeC?w^so +/YOn" DmMQtB`@ FuJ[H2_d[SmT?Z:N]5ڥSY5)g1JWFr#)wep ҚΙ+2Ypk%.SvYKg6M-ߓ>[r>أ%b,`o A=.I}9vX .oD.DdSJkf*@ U VE&Yp?g0 fG$̎#HU4.饢y4Q[C2MNju`^rΉ6H c"9绚ⷮ5UwͫnQȕ kj R?PJ/Q T:-&quX4 dE%'qW~Wc97ypﯨojϛª5OK$0 gz~:5'ݟݔ~qT/sH5C9Ԗ[km.pkAy z@"eڒeL J<<4<Bm h.%*g4f: dΙx%M)7)A2x9‘A\2 *J 8irYoNW>-TO$BpCDԫT9?wo:\DCւ3:[`)btflX(t bQoUqϲ!d,],1z.YFԆ"kLVx9-}RҞP"(1Id):s*fHZ'g)R6ϓaFHf;!uE;uUĕTUܱ"f Օ9!uE+OF]q:uUpH UW?ҴS=n /ޕ5#Ɣ q$;GD?t2%-jzb&xY$E%JnۢU TfK5%z.~6dmklye 2և~M@9hM65Ҳ1bSj*[~Z]i X6?F׃һ=гa%>fv\ k3)DPV{I[ S 1O[ a@RRP&T(<_XYɍSh55~o?6x4ɛ 7LᄣFv՝7Ygm'~?MA[טuOW4NXKݟ{8e|}GSft| -ޙ:Z<54zeJkmd蜮E$`:3DhP6  *^*$kvxx4QB`MY爐,ɋu3tzMO˥^Rr^\xi-AWn~ĖPzju}YDYL,A BfHT2t #1`<"h6<Ј0k- ISI=y@!,SQ,`S„Xs#R(unAgW@fo0唄EEj-(^y kDXKcHlQv:r aXY<Мv>(b.!5M@p$#hޣuQ[ ޘmR7ÿN_-Qа 2-GwJݢO-t>ݢO-t>ݢO-^}<}G4h ]q>6ƜOLֺ4 -L2R6Mv Uo8SfpH(R)fRYiHGk\Hw K̶{ ԪvY/YLp4N_>G+[Ec y)]٤KTrz.L-L.QTOxui=S#c. 'g^\Nox[[e*'03 :93b)^inkw|n[3TFґD $:a֔h@)@P+QQsΝ`T"xIcBѾhi.mDAzd ST*gg6fYC74K͋hV.m)KvQzH1[Go?EЬ]A}rU 6owT޺ލON?nG#g5GV&=oR1״azY7S9zW1 3:w)RZ~r H wH΃VH"2!dI(xEug왏3٢tBzquGL\ð;?zD'#3a -Zdv_p`hQE][va7^BW^\-ĢK7X#WWJi3TUA6e 0IdlV!˪pl#ZsS;=9}yE4YۉJP<ƒ,M~Mr+V%k)hxh8R8&j+ڮe ~HЧ(9c~H!CkէMԵV](ZsFʳZ`QWyӿc>?&1S~˗'oY>\凿M}7cg Y'͟v~Fi k*n t4f6mh|<.p ( /oos6?5~oݎgo~Hhy5f U9mq[Ώ5sJ)hb3OhW"mK+ロd?$-i> /#Teb&.L[B lno;_6k7%\/?/Dm$e.˂-j7{w܌I UNl-U{E߶2ِn~=ɉ cՁ(U .)"ZKՂL;Zxau:@2+WUzc9of'Ҧ38MzQm !.I )^[:kܷ'GypAڲ'^D"4ZiMhB Fk|n4b) cd,Qbyh2}1^HSu|dݦ]5>M/luEqZƿ>ކ ]x**P01Jɜ :J^9DIm)y{Afo"! 
’I:DAKpg8x4vsV^wvdM13(e4Hѹe9SŬn/'yHͅ+bVύF!5(dUlF{H&:WLHuB&: IeW۰;cZ"E: ',FhU B6\2I"F4$$MQ2cc-(0|HIM5K7Jdt @;#g,^~iӏK`6 TSVx]mշ7=6=Ʈ07E[ ǫOW9XZOZcwkTBЕ6Jnٮy',wb@; =#BJX4MH!uTC2 +0&" ʐz_ȋ>ܯYq:xTCc,}Y2-I[S$Ú \ff9\c!~58dz+ +J=Tt_Dm[zlWXU>曕`Ϧf%siZ |R o o]wvMfh"nj tMJE^oӟiS:NBp:T6ӿ- EN \ :Ԧz-pP5K (SFO-yeLR302tlrRîmT/yӬq=Rryi[8 ea?b|zju}6dQIkF* 8B |BR%DfZjAmaC֙y"v4Q3d2CnmH˒`Y錜=w7%D$|:Эdɋs@$2!h^r\8`ơ|`نP1KEk t{XPţR{Zȋbb7;质tiCgSa)^fg5Y f' Y JA3D1RgZ{:_!e@(%Xx8$Y̗1~ʄ%R#J=}KRҕEt>UU"0I ւL1_ƃ'jglk׋87 z t sO0> ڠdmy3Qes!e-ne35%5"3nBq;E-Iz Kz/& 岕.pWt1A?*t7ڵ5%06'̤HNB䤹sEF}gJgXؙf ZX =e.%R2xTcEwEic2 STt&f.L*xH=o>#۸_Fn\ |9+[sbm!ǧ:Ҫllf"{Q2鳠lK/AѵRǯWn2[Weq7RVǽj<~yɏ~Dk][̩.$^r1~vD46[܅'gC=7%%'8\0'tHr:M]j$y}z f2=y 9|}8Aáub(Du|(kw 6jRT=QpsjrfkT”!RkeZR*#FHy9C~kyeXn`t5P;SfM4RZDSqmt&uaoGK) 0FG΂FGSXG*:kj̳DZ: 10vBI|yfudFzꍎN1$m63]֤"-i5p "x`.߫ =I#WIgf`cLt?#mL,lHwH9 yA!$Y:'Έ Ba@O%(ilPjk CeU_>:=_=3_k3Oԑ&Ïparxڬ^Vs?ah~ȽZN}kX#jn\iHFzhy#j3Y[(Esvpb\?pz}0Sן>m5˞Y%}d,MpebU6͚jQ^V9}O9!2MNjun^rΉ}.siR~cL_7>JSf:85Z.0BȵGCn7:B닮~c>^{yo b~]\?L`]F~i.]z&5+lAR:>_b^4i&!Yr77ę/b߆8?L+>hv,A:o2p/d`Wow'xˤl 0w 6LnѮool{O~oCgj7Y-w.. ~M5^l m].梞_>q)?wWL1hiO^798=槔&E_^]BKb"j^REh*;A^M['gVq yę掠GF!Y\f"C6$K$SXNNnR>ذ ~m[[:m.a֚=|hj;l ąx. J+Ů3*%=O^m&U܄V-L]@m8\k{3@icAˠʔ~q+=W5_n;[{8]rtM-֤M[fHi`A\?vvE&g2hj%oUoL𠘬7mZ/G>+ρ7iM*Fjk=VU-wE @r}ܴ薖y@>w]Y?O5ߦ0DXX& [lR eKO"piBAj*z#꽼ɞtqgtoHI臚j-i'w/@'5 S<ȘE'Gٖ` 7WR$ceLIQUKYM8| ndхG)O M(!sT Kz?Xu6WB՝}?1%2`c.w2RZy (6{BVi tvqtnZ[]9ήptb BjSJj4IlȩgƊsP- u)oKDHZa2c vmYT}#C8E,[Wbi=@ ͡Hܽ':94'.xHܠ0IBBDv0`0 3&ohkCh*%}*Ś]1F /ͥ;㴫ޥyzFc gCsPbRuIm2)<Z9&U)L11% xQ.ãkaZR|Ti=9&Mj`ቒ^tUSJTf2hDc!8C1 N&,&bv]d}]),SF[հ)X9E 6ZeRN>'xPMNh|"6aQ.@Yӥ܂VŢd2C]4gʼnQd57 #hBZѠ ri@Z٭G/Kv98x"fjl;DE 40[e Ka% Ȃ@ /Z u.T4E`筍Ems8;`&f]@M02]5p`RP xg,%m8\րpC$*VGg%jlP1B4A6D(I e@{{{:FW{Fyz5|< ٢GOhwr`z$ MF6pZ^7Aip\iAJ)2)8+sm,$fWEV&J2~w1`Dhr'c<Yx<} p9,(0NDhM?y̐ڲh\0g5X#4E_7o4 BJI5\OoeĽ"j4)^n$LL:t`_M|KZKP aHtIm0Z{v[ Jcy^ڭ0:NU~?_ZiysMa&^K $u&nt#I`td̮S`ð}ӿ4e{FE:Y},2oM #e5FCk'n0u<;h 3\Kܖ9p7 JxKdL!u.$ds\U9 >}~AkD>k wzT. Kk=x­P@ʍ`{`}fBu&jD|⿁+IpA.5\'gN"0Fu^Э `\* ! -FJQ``Y >1)c)k,gDnV@czz@.s$-|35JL@$46p-5#;eD}0՗o:iM;H* Xk0m@ncEz`)1#,0² B3 _4Dҋ]O0%7a@D;9>X2^@)-b8eYRF'`~ "i0Z[M>Q"VFӥ16X0+&6w.P,wR@å1Z`4bM˾|5|];30& 0@5QwKBFx7dz5y0V UUIKjA]^X} g:JEYht32@"1Zۭo?J[d?.늘iMwT<]N_SVʯ-3;a6A^)Iv |?ު =&0V\\u<+X 6 DRZ\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE\*W_,`G#>yUY8Ip% 7 HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" &ksL+dGpspk5{+Xi DH3$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W*y6G%R2~<+`O^p+#՗(rZ֌W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\Wu Z:8y1*+zK[C^0t7Ps XZlIчv<rZ> с\_>J[X~(¨oexTN05߿GΝ'm囼0uַKEs0 *iՋ\R\1o~bg,h[)od\e@pJ_.Ʃ..n'8wWBOJߞ@sپ=ϡ'k4Gi~nǗiK5î4`~q6V8CvYQFy F t6H:{q~zKOR:-}IN0aQVΊӗPrmYco[-O`0Eg@n_vSXѶ9خZ78;hOWcabG`XZVO+5iD4ro҆6j`3w\uGgk̿e;L~3&Wd= {\NwCӲTG 3=ϭ̵>.nu nks F(\ W\s4 ڇX)$/0\ Ο¬̭ xU1J){쭟8MkߎU]~_[]md7\T\Tbkt?pJw&N{y15+[YWUyΔn10~獛-v (?т%u#QʨhLH/QPjB,HJsi-$kR:xgKi˯Ŷϕ.b}:6gz:AxvX}s<`>_wC7'{{nӈhPWoo+PZmHgslQU*2RƘnU#`-l{9i7b2~i;,OvͶNJL*cx vJO_=`sLQVW*\yt\:2ɺM@ ]p?N!/rhh`swtד}- `>-wk׶VcWd*!Ke @rD!r^ItyS4%眰qaqV\wpc0`9v -&+s5C<|h]lJq4P@Ň@Em/u.go7B*Ʀ2G.Pt/z|1k k`jv6riq|gź Rk;=l Z|n_O/9PEak OV^V (y0 |σ!ó+vµZ7/_jy{x|fUNm:u{ʅ9|S[{~Я0sWfjV{ϋ(m3Ay'Ca̍#qsÍqsv|M;89uR/g5uSV2.UNEH4i\?$i3;>Y?F{G[vOgn>c6N$"pHo9nl90pSڤb9.1b\uSWDZ=9d/BݑgoF( c䂖7_[ˢ 7@E^01^6&ɐF:Y32G˙*Z|P}'͜ Lkn)XcH.R51 ᣌkDs]SJ^`yRA! 
G;¿r\PMH#1j#2Z9mn,^m27NL+,{9#;l[Idce۶.F^{mϛvӎJOupP&M#MY.yrjvh%3eV*{Y)|e~:t%5Y4xkr5rAhUB*GYέg2*Ιoő\Z5ĔT8c9Q]8R]K 0Md\NgLӛ97cIOiш뢑;uV?u.jX+_7׍\{ͧ:}}Py16*b9=6xp,|RA2`uy(/ ^o> զeg~&(Fuoo߸zBFD5?-m<9r[Ze<+kx T)]El!'hDOU4 JI}cv)H/ 6Fh*3n %w*?q}7s5 >V ?|Hdmgjaߕ*9{5f ~JggO-XT^b/"Q쪔SyHX `.hKN忟ō,H\YA`3;L""YS'O3Fc+&ёȘʑHȓgeF$yKerƁUR 0Jlrd<3)՚1T t[79o />{Շo YS1~EY>g))dy).ɗ_g8_:p!9bVႍ0VPxdIE#|BɅdBJzp.$;  @Cf[т ʸLHbPj7QEtS|9C  ]5Ġxv&nr.s`ҟՏ;!?׬Dy=60g-0v" ǫOW&R=#^2l{ٔҪۘ4=qk$jmKi&` -lJ s0ݹjcQ+[\6݌zR nyB:;壇OdK=ݦ:^+M8%Txf pjhB$Cbnyn_-N?[\C.,I}Xm~؛w˽.6$G<EQ)%|LQk+r:F=V\kGDJzwktU"/Ty5leo4eL+Q[uNo=V[/ <^d|i ℙD2s HwIFZ:HҞd(>H%taZT(Dp h &zuog1ג]GWs=( KJ|EƉeiV4,H2C* PCb1%h.8uۃFR-Ϛ4ccaGm{!zå-X2F .eTN|l""L@ogRi&V9 H;Oz!r -5>@ Wg-R[-0;pH+ݪ:m[e;ihZ$OF2ȆG-ѮP#BK;Ʌ0Fk1qV@LTNx6i{Ӎ\CozZMOoy^B`0ѡ(ӚI;qMNND8eAOtD+fJ<Ϯ#7 [g>q!H0Gg(VDRQQLv$sxFXiA<Wʺ-Zt .nxa;^ˆgn.y (s=Kw 6ʄHXýȧ\)qty?U2A)_2xt4]on2̷2&1"򺪂]x@"e*X3V>=~QeBz$mOc:mdm??/;^ p!r}.)2htMrኜ~h:0ȴb9x`2d-,W̊P:\f>/ʶpRq9_]wN\/^mU h4`5Q:V.g(iٔGh&gǟ0dcݑo9}.2v_7>l`mSd!H˜G&9alsb6̕ Qr8ó:}meN/pyZO~:9ߏ~'F1 +]G!˳Ϭ5NyKkJކ*roKc?lL,A)J<3.OO^_aEHB'Ŧky3N2R"]ƛvw! W`Vӿܸ-}ylaټ0Bv{.{.Դ/DԯM&гwoXWgMjߛŋ(O{9:x7*"[=317 5Gh-kh?)PB/tDRr)0v~\Jd\3AH\?K4Y*٫=x5UPK佊dP~JUH M!g 290 ϛU=;-gـV,|daԎ\沖|]$H9m1i-֩t6RsHCwܤla~~|fUWl^_S\(,Ӽ{ğYfcq5cDFc4xٸqk^k= s"ߖj=@;nׄ)oslzE8- 7j[ytbߺߗ+`QbcqpIFfARh<:OrQ(UyLZX<(yEt 5kU3B^!Zm%KHBÎPN6xAdub 鬨Zv^;k;;멬7O/}u V2Yw%/BOD=ZNSxykG前9E: VXOYaJ%'OTXxybqp m_x3URB081*CXL)H*LdOݕ\]XӾ2Tv<;.:w*,`Y)IT&ⴉ ?sn(HQZ UsM’ezFXLrCB+L*Bzkt.iTpwǺBe8IT@T`.*r8݊?&T`L ٣_{2[0U%ρǿŷԍrÞ2 587s28|˹$N<ʛǪ .?tஆ_.qNh4M@Q@ޔ![)9:ӊsepG Q٢l(AJ: A}V&ļ7|qOMi* >gUl$Uq9lC?ǯ) cnjtM0S5ln~Q2e ׉s\>(&`?Ճ򼵆V[T?ן|>x{5;ox 64 a.pFóz\]`<t5*éQ'ikؒ[hk7cxk3cIe&uj^'@6ٶV꼓mJM3qaec]'iM1=In޴n9͆Nq:Mѧ>>}eӻ[\q#0>.mAho~CӶy8nڬ؂z -ڥ.{S},[[9CXn+t՗92 :-=a\\ ̿D׼r[\hR,_ʄv ej\+6^*nQ250HN>}ĽJ*R:s 1LD| z7&Cp8`#|pU#wS؏=CCWoZPt(!΍ |əAu6'dxAF99ƶ\t 1֞~V0qg.FcyANtު +H yH{X5c׸WV^' M{!J"-ڤ!H53#:H:0J8^W1u+tHELEkvIL9L=$De7/=y V9Y?BC_D0~ʆee'FƒRdTͥ%ۣWZٻ6$Wy~`Y$\{6Qf)HIQ[=3(Ei [4{jJ.8[(8[=H#ϰ+1T$1.E^!ZrYQISvL1*!jc NP,"RN@!9tJ; ;yjݧSrUG'ׯ,m0E+WLKbJ(r:uƁ{LJFyɵK9JS"ʞhrd{d}pYSȡԣxh$( r28Ƴ 8 5~4^|N;``rSM_ɡ&7=6=DQ'gM..7jzjN窽&Bq:|d`FϤ$4N έ# jΪP) 8 8ͥ7Hu,@6$@Ǡh>@ U*Űf숅 ZKc-r-ͮg&1C{9gz36|l:hԡ£"ʘTv,a"ѦnD)q6# jӎ]QFm:>DޖUD1KoQ)}mŏSqk Kbޱ@h*[91I;#@!Ȣ=&D22:1C!Bx$QblÆe` "ӏ]FD6  ƫp`$pQCH۠9$:J!䉗F̕EDk(ڐG)8Iiʃ,eqy#jȤIrTx|`blFϧ!.NYLKvύp@v2FGOd%2ФqFGQXÀT-(;:m(Жh'6im(' я/Hm#q>6]:@˄֟ =r (k⤩(^;ý#EWZqi}=9O//xWɎĴ|6;FәIզx=q'y?g ]rAKL0ǃR/285.DHt[.J`rO$bM޾nٝuC].ٵ)DKMq(D0]ڜ,BKq9.P"DDL)eIOp+i"8$KL(ǫչ`ߍLf1<<( gJ~/C;g@Ǔvl &wF[p(; ?J*1Zq(pe$p i*+Wy(pprWRqkK vySKҩrݥB>[" ]dn!}<_ ߽&Y4DȚҚ*S^~"[d 7\ZWS7m6.ߢ+ߏGqm:pGniR*^Գy:zs>t|ԝ,uյN䅴I}`Zܺ(ч|)'lFWMQl~:ߺӹ u1PNsOOl˙s8 51 dDYl `Q)|TZxbk'A990ʈ^#N`sTVQ/.rԹz-BHuI)x&7$p[H^8K.:o/5R:ֹ||=Nb$m1|=nuzNy|у?[z"iLuvxPUpBZ(jl-SIBY+~׊`;3Iu_jE(ƌSk VLRGp,DS3k-rN M^ii#|KyBE_}},^Z+GG ਣ(I"zgRF=P%0Go@J.1\K:3& Ar!4`!D%L0KKt}blk O<8 \N $,Ʌ=cϙGd@UTRۀ^"e2>'eY]|!wQ$\4qF("=m1hU`0EܿQF ^7uY/ǓY_2S3Bc". 9:BrpDy1.YA>M1q64L;zD5eWx׻>-!|h~횼_|cSws'(*Aޢzi4K%.W[IW ^x\95v`ủiXrtM6, `P(Zfҋm;7R(fp§J$UFv 0;(P 8f1z $Ai _'g!gH@]0D#u-Kp:3)wе5{0OQ/gvo_~:;Fv\+`mTa[cGm{!zå\k)PQV;%`Y0^ 0Hv@B< [尒R pEqҜkX/?r ;k6jz?lL#v;tk$«uw/q#+Ԉ$Nr! \LL\'BA*P\ o|23N=kg@ y&X"Lth8J9f8FNs|D2M(A"D'j@"ȓuY?. 
#'NrDH0G3kyJ+")h(PPrtFrVP1XklG !;C)*fa~[Y_*ـF t?Nga|k5k;F0i#`w!&:5WS|3Ts.f?ŋrVt}mB`/~[!T!@N۸%h^M(WSkq X'"]|U}hz5>E|:K.$M`jGaP\ WtR'bIۚ[Ge9pm3WUmFԚ\:V.Yƛk=\&> Bob~@yK#kqu64iޠrF*OQ}\O!:k f:_^u8o<7^W3j['|!ڀ/gUif⧄%w7'wք((/z}rfC5PrH_o0DxZN{zQH]z'%[t/`AпWwڜb?LopzEW=˔3Et"t4s9:ڹzsl46{ҹ\Nή;4Xͭ{ɏjw6'@YV{3lf]yfjbMF#7M[N%%R(1J|bǏO{礗'~N?7)z7Kq`;=Z4IZDBz:qVD>N )3|H$6{ZrUoN>p5;Z>eѯ/jfi[3D$>[)qh VqŤ::SL >v@Yٲ2Rq`U*kBL҃\.L@fLz*y-8-xden]|l9E-5ůU_d(t)F#(u|Wy-EҌ]]|Y*УN]5ɨ6>eS uO.pzַWvSuڕ0?alԳ-0vȭWALW_^\jm51vt&@;m'q: d%DH$;ˎlk ߊ xz?J셔d # 0J_# iB€øb(*{!^[R0l3he)*GdNF+WWEhvEq*+'NYAL JʙHh|; 29IhJ(uNg,*R'Ε0h]aƵ-g~xQX?/!-O_-O}LPrg&B(@d'!A"+MVCmY$+SϙIlEd1<t6)K,Ő[L{e L9] ȇT5 J^4X 4X8P񔢔:vMvl+/,mu4X<]s9A!V"<_I/U<*H-+ /(&>8wWS_*K\?ǎgYF&e.(CxYNx&Y۲*JMI+Ϳz"D9vDs:\;V\sV܆jdpG0P8Wԙ tfZZ:Nc^wf;k6߽O̢I99i7h]]e`;9JW}IӱS6l;=omG{ܥިHO=_-ܥ#C!oGWDQ`H<Eɿ\2Jc!`S>P>ze72->_- {@<L}WM'&O Ϟo˛*&j0lm`0׍ RVsnXhge [1|!ǐB4Ow_4X;O'ڲ^gC | L#lm=mK<*ٓD|ewAǤVZ5I/C1௘BPU95DC ]s^X{1HD"[wW2g՟҂E!^ƥ:\fj/2Z*ٻ8Б3BGGi4є !$q{  ku2Hݑ%W;令.2Z8CL'PK*TPB9Pi R)r>'QuUr,k"BGla i.k&~t[e;ḫ[? e\-hw-:!@L@J[4˰± F܌JxCASm墨WFFiSD$ad.%i9Xi%XK2EU/i_Tn [rNM;mFwJ%RB[L`<v"=]JAAA)b )QlHTE4Y QK0~G5Zg~}'-c-iXRo zÏͿS7[ <7b ֺ7u_̏?͏y0wn!VQlU0" J%9r[<bgezJyr%NNQ T;NjX'~|4;~?$}x>$ZƹM,?$'{vChZ z~1.`K2qy\. ϟ>0J9Ȯ]^\Gh]1g=rxay_+ǃ8?Tޕj"MMub#۱t0Ool@mzE&{GHA>ڥW#QD1xSRZI%EIQ%c&bNPGz`u1aɻ-^%;5*J69)hء !?B啢'N=q|zn͖yhNĽ r%Ğ:HZI,o%밪OyOq~5g3H5HIV{tHuEE@W<<߇ynl|H% \O  t$cT[OE+Iy)\RMM*:Zc*ԏ[/iwc̫= ~G /Dy AR&:lXX;va鰫鰍%0P5Î/ݟqsZΊΗ,;u#[uѝBꝵH룫%1GTKbkR>C(QKD BEa FU((bo;m>g$ATHч1-J_w; 4csͫ&<[HfO)}n\%91"cJUtJqGZYޘf@ߖ`u&"!YR[Y<$,f_Y̶gDͧӠ{qr,;t;t=m42Mْ¦8cLSΏ9S8. | _KDUG(Q QLeR0\Q!$Znu BgRShȖ D"@(lzrHڦLNd0F_HĹGWfX-lZ [8?_o e k|ffkfٟ82s77dMNA`t.6'Z{%/R.Du|[*\:{>IA*Yͦ26 .N rEL,^\iyJ;\Ҋy,VvҀ!32d/:l֤,ر̒ak՜!Dd:[f췇SV\\ XfX(["LS!JG58 C K& h4kI"LB$C[D-e8kTEY )Dl2Ȟ4{[(Cl]{L,l|rS<.6FЪ]ܥޡ9(c̡Ș A^x2D< zxx,v<bc{HjeVs'bFދ7=dJfԆt:oω6h1b7T_I +&C*:g5vHYv!Rj(w:]6>rT:&1uGR%D sXT"tVΑU%88k}.Û5i3Uy&n֎ қ߶Uwz/FvXO?.٧1vt.B$pg0+GT$,Hةu R|e`# )p s޵Ƒ2KpDx>^' ""=hF{ܧGZufEX2$\%&Źj]M^෪3[ 쎪Uw r+i*iqg \./ S6G#K۫;*ّﶏfU!,;nj 75/ |hJnhK}EU,i;*%"Z-9aWe2/~Ԫ=+JstW%Ea`5a3% !R D7= aR.  C2e-+-5DM242FwpFDSoUr1͐w{$MVߊR&2<:wa:[U>Tj[i-aL2fW+0ܜ'7/p&i 0n:+nF>7=dέLRJ W> dX毽wMg;5PLY;c&(?~OXUA g"`A2.O'ݓ!Q j! σDyW?Ѭn,$=iZHOyiC)kKj?Btm`s*E (I !`up 8xV:x=A3|'6 ͸-4#=04#Sa"XʇQ^0j+9J0WvazF/겸̶鼥? 
joFPlG0(i/% ^lo9gFƛ(Λ4(#)#JsƎ "|LZ3* ƺ9DRTDbTIbZ<Y2Fpi㹳j *  $-c 2ŕK_ Rp'ǣ%rXĖ ϗeFμʙbۚsg35,wFHUΑ|e4SFDB0Qkv9܊%^ ?x(Q|XŖM8SyZxoF>Uqkniٌ~%9K[FsrMaU\fzc͛؛s¯@\(LVWCM|b0Tl._P@!Fs\|Veo#^ā)f?$r䍣נP>/ 6ɦRpz|(Ǖ?cϑ 1L}2\ħޝ*QMUHtuu9\@1S"{fu-N/pcnpG^Q_OLK&IY5yS˂bėa?+Ÿ.GzquHYo>8%7n7567ac3ˌlQO8y.>f].:g6VgꦾR1`QZ38[a}P(2d\ M 3N"iSWެiTv7^O|v)vhRf~V Swc#n\xcFWa+H6.?oϩR/ h)@s,T<_sU&G{x޺SVhU8HGM޸^p Rde ja )}`&:饍 3+G /U38*Q4$#g1D}j8E9q hO '3Zt8~;}7:jg'V do+[rq"6ghwcOv~: ׈U/Cv>/C0d)U_]n;Imqv- ;q9烡RNProd[ g+HNz.KqO_N3:\wNq_m*1'Dg6NMA6)i($%EkP^ e HB%lc2$9 ٌxZĜSrE -=G\'Zl֎%rxu'Eu7vc|Ğ`'#(諝 ?O7We O`IMaS!|*Cpia uyissΖ rΖ59g&Z!BpF3 -1$.<[H@EIzf*Fi&8uP#EH D!jN*d͂ami !f?Ku\=żtpш%$rGb`S24(ɭ,e*E .ȉ2,{Pl$HwAo!R"Ahʔ) jPk#gB+)R]h ?{jɻ6>9M&i}O3Ugӷ]EWUN)kl󶨸:ph+PLA%pN(G +gdV)xR됀2Bd 6/ĭA|# nTX9>Y6,3e ,aṰpNQ!KƗI<nAy' ;;GldC$d a*BP~IY2 CeACgÕ I36@IȍLK1{j:݈Fðb6kcQ֌ڴCN|,+,DoZ[+- ~&2P/hBz!dH9jB bǘBxTfk#g7"~qbYLWlqɱX{;\C+bD5,|Zke#8PD#((PJsbS68yxxd3@<ʧro=r0[?o}{8uޏ%]$_ŏ#Ҕ O~ \qΏ1PGBOHRCGڐ%Rg-D=5b"&p%D;J#>ipTȬLъZ'\J/_R^7t{96Ctc-~E.zշq+wo~27o \}]"?GA2\J3ԒL6GѕD[Nz!lQƎL؁Jے#S;2Bt;`U@q6 '$?TCXUkRB%у %ev543k:%*x!T 0F+HS OT 7rw fayOZyx )*x䱰Ɖ'@ 6cpӸA1 H3b^x\˜^+ɌK,:JiLԠ1iPL&Źj}=^O3rLUOwAgAVʑUhfAh{չ]^ ߧ,mF ,]?.&W9,wT:/#q_mͪB,Y`ə3noj^].GK%h Ң^@Wo'iR K~Κϊ~}XX->tGJϫ4N7%ݜ y0*uPWX "/40LB1HQBBъ+I˛HX*rmm|:p/0D>qㅊ ]RѴSۜL`qI2 bJ` e[ FC h |49gi/xK!5 HE4:V U9..s3 pju9ԍ$fn..|TF) (qŃnc[e6,rznѼȢiv8ZH\{ػ6n%WXSth*v[UwkӽbTDَ D=HHyX33iqZ@^Fќ#c#A,,Yuv&)i~ȿp.&Gҵk uP'M s3퐽P{ro2!E#d7F etRj /U$ ^HX4$}uB֘c ,]F@4DRRP UM#-01%$yptI9钵5(ã/"H['rGcEZaN|VqG}ٰ9,w\|9Z7rO/:U/T,M!)FM)dp":7 u6tM!\y[ .EOy`R` zxCظ~7J[7҈V=e~M-3\hmޤM`lth#,{(E`}[Bz y>SDA12T#' r,U,m bR!U U4Q, p蝲/TH'Ȓ0Q%ZVǭq}2p٠ԥŽ/gaW^};VF\c[|Ұ+PwlmoxW7E+JC#,⭤о'O6/et \u|QIxтcE6"44Anvu`S7T~*?v#HŠv)W=ZYH#3TAx9uu_CG^k{|nݡEbj](\r $ TF탎PJ DVr2I %*T햕N(xg8πѓw%d EE%{iR:ܩJ{:Zl_5Vwb^ps\1Lh"rG3*f=9^]~\67Hچ3 a Ʉ!Ʉq!)Tw !UBHՉ>Zv2 11u3;oP"RHK ޓDN OD48.I],8X Ut!Gjƣ7N+Ҳm ..3^~6W.pznf󬿼=pLuK:\̗_w c[vA0ssu3U$KAsaE1S d̠mT4AaBV(S)Γ9!۶Ek3>ٵS3QXa-$SXET"ky$F,$TҰ 8ҰB>0~7^xuCR6UTHң%Ho"G T}xas?6`mr'>jZx@,T+GSծ{!J=&x{2Į] ic[K|x3L}j"$6ToH9M(srs Jw{BΕ.,P`[q9>lQ(ecu]E(W@{l7.+<' dLR32 XPDdI;myX. tMV$gRO?5ئK~l/>zNdɉ= LT I(T M(6O[wGAƶ>g>-?5@."t6)+,Ő%9q{7Jkpv+Aҝ] Ckn%K4hi=`q5ISJ86=b|UI.Ɩ-4w\KPCfl‡ȖZD@N?؁j0jj91ʤl uw#+ VR)>i`mXeur_u_;OnR[ufpd0x@DZh_ռ|mfFEy%8\˗xTFˋ'<ʞ!VYpW!J.HoLPrjV&["-RlQف!u E1F'p, lZHRe=*yppsQ4mJ9$omHf_U'0A,&E GTV7ٱH:KWjX=JHN?UYx:9] 3f~DIhN skk{{a >.*j] #6W)ץ|աF!/Ý5ag#v=s6 钑MX5AT+QIOY OkiV*>ɚ>˓4K|c_jDj*6HyBo?dqX҇ 0I8^p^o/Zh|/y~hp汏^Zi;``G+KNB-v*Jc:BR|Taҽ+kBWPts80zDW]U}+Fku*J#:B:"{DW *\}?{׭WQri8tѺJj_"XR$N*I55,Y*_uT>|Ɓg W.+Y2i>*e &W*Zua2XF\֕ph\VͻǕT=puʚR]Ap]\YZ+z\]]%?q3^F29qINIw? =*]? 
?2 :`_m$7 ڦR ؤή ԡ+^W,ɥ Lm{Ǖ9pu .~E rx\A~u *Ewu>0p221+S˺w\\]!3.+ܕͲ bOOۦ2WW+Y)/+ ثˬ]KݼM%kW׈<ŕpeuw*2+S p'uܕepej/d0(W9Hԕ` Lo/٦6~ *qх!:3hr2Bm޿虏3W+f}peg='k„:y{Ɇv ٠eS72GBs~624`7cw&˸v[Ԗ1:YqM/\$Gs|;h _4I>TƝ=2mQ=u}L*n!\(.eper/}I6{Ǖd=pu Q6M~\Z{Ǖ|`W׃+jI 8]W&ҏSئVqe*ᮮWD\n:2>+SbTUIBJ #22`Zrw\\]!;WZj7R;z L}qe*wu"?•ɥeM-qe*B\e&֮LeperwW]]#7V^!J8y͖vJ\yLߦVwi>FL \O~R7X ؓ7z?)yd@ 8x$ܥ/&_٦/tE6{{jlȟrϧOQ<-+d\L j{Ǖ۶\• L(ڦVi︲Md|8pu)W&Xy\Anr˸+Sqe*lxzpE]\4]9Ln^]Zս *NSp٧pei+˼ LmTJ:puJ²ͅ &W)65J$t!3nrybԊ;Le>5*g/).+~x{22/Me>.dF\)~o/qw/I_"O% H{ȫoEdŭ6/'B\T݅6MEy1]h(kpu =p6WF+Svg LBŠ'*2q2\]!H5ǕAy\AzYWR;lSyWBq!\䊮+S_jJuJLWhwere3P+opq]&L0epere36 T>85Jm•upu|:p }9m ?Ў(cͳ~usc-`MmоxAa߽sM>ۣ,n PG=Ωc0ÿt@߿/M+J6 g^wǏߣq޼C(uѯovH 䈓Wwn?-ՇO|zx'4?8'O dOǽㄻ?c>3=ݵY Hݾn6߿#0>U ^[qa>:@wf'tˉy5xp}f/ /B0^ǟl7z;((>~M{'zCO/˪3#R> OU)EW1x\|jD¯r;}_?|;}QnǫH/ލ}`9z/r0oΩ+<@7Z8.9&/#yw)<'}~U4R]*]KѕRk0#잦J{|=Fw짅1ItckGńVE%骗Fohs!tH84{2s1А]ko D4Lge֢($5jѵ9F?Ku*dnd 0y@Xx)Oe87U1'Z^xD,}IBOɌa!$hǜܒobma^SFj).Id}/DG0>{1!hZsv/h{6xJ{l")tD %Z' qi$HʹƬXYf)$͉Ǵ(U4! b _u:'Buyۛ6eɥ8%/VDy%NVҖFW<i$eI fUFB/)s1nFnM[ _u}^9yDMCG_Fr+`##$3? lR(-99źT" P# %!BQ\cC]`N'b>YbLKɾ)pR4Ĝ,3Jfn)ĬR%!\*X8kzg@ Ȏ^G >d^DVfeohZi<%63%%bI*rs1س(pt~["kafx&^v)ӎ#s uQN .ʩn]ˀ 56 Ʊ6tQSb"Zֳ0S:ƺ џזi.Ǒ*f19)ֱ )7faVgUpѡ!ZnOd^F"U@YBkb' ܂mr.k@z`OQ !SC0Iﶀ=X <٣3Rp+fwe&{|3b9BA29) y}`4܁HwBnU{CݕG 0p(SP홉( 9FdA"2ᢹ @|49*Z؍$Pd%p$u0w\s 05hPI!0Θ2$U ;-y3Df:}nɊajo깡LEwVv)4GRF;)U BA=KNoE"=d͟H_(}8TB*UbOcDTgu]0% Pz*Y"f^O0K=t 8(b8[,CR@Idu8!P>vy1$T4崑A~! ;[UQ}/VĥNml"9Ex> Hf{B`b)s8$w&^xn~|a{n˚WMK-x fLac n.`J63+G3>P.bK?o:@Gv[,9*Uk%KZRaŰ''$ ŲE ])ʃk$jC 742)@c `,i$r{\@*E}5XK66x$;jxGxsbQY,ԀGn>o9 Ebޱ@T5B]6!Hb#O JvN7777n72d|b+|mi zܥROG8k"! uj KFY䉺Q~F,#,涩`kl9aQ vq2,Fn--ع֒Շ ⑲fx4v/`"ZY'%\+1cC)a)4 ĶS }F,;oֳbƩ1a>'"$_T:yXNr)YcX0jL,j1"#A:ʊm4%ab' U=X~zBfm\ Qȱƀ#GI16S3Y@w}V8}vd/`& T qES)~c-OlykPy[>H-v;3A]kD8`ͨ@iBwBF9 ,S(-zYP聕%y!w{SHն=YJF?kkB%ci>#I 0wh Ji9s:k̖^j`Ut\$b!s ٱ:f#ܤ=P T$ٻ6r,W`mC@cv b*DܒĽ{Kˎ%[lUEwzmWXE{I8D!ܷQSJ.hF슄X`3- \5],…oGf-ngYXA‘*f!|qkN}"ktJԺڛ ,CS߾}f}EhR)]NSqEϽyN{ l0>YTf~YOglv1ퟞraOS/p|9)4N׳*A}~MU3_,lΗl ˬw(cSѬ׬ao $>jBSfzym(g*JKJ ΖWwf[DҧA $_ɐ@GR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJUKJ ls"9Ђz%@ǨL>,U#%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ RUAsKJ rLtG p9 z%@ǨG;R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)W 5%%֦;J ktg@+:B%g| R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)G zF[.5R7ło@S)y:_wK".!`ߝ#f97sm.!J!Ht ¥Ua58.6 ^o$98[]UEV2H#X %:߰`c7ױ 0~N(%g~'UϹ6ڝKW2B\tK|BR]!] \7;DWt-m+DFDWCWRpՂݯv++:CWVJ %]`]g ZB޷]#]鷦KCݡ+dWJ4DWGHWFxL u+th9kP;$(J]Yw]+D+M Q*KtutKt ]!\ә`֯Bm;hUK]!`םuWۮЕ}J6'4U#/}%N nߨ.١ظ9{auRh< ҍ! >7HoU/_0~aY~yTէ|zNp먃:_*}qGJU&zr}&esA;vP4URXMC+f4ZuQKsU+ՠD=PT/7}bZ-R jFU9txlƝJ_e|$;aЁ܎݁gvOhaƝvD)y9R JHc:DWXqBRv6m+DٲDWCWf-;DWwǻ0ЕhNh;]!JItut%3!B^#\#BWֶ޻B]!])[vb ]!ZNW]#]i5Btm+D)Iץ5XB@ݙmdc$_ǙWz笆?6EXmpD7F^"}߇Bg`\+?9s_W>]y6^n~/F6ގ 0Y$Mn޼ak2|9\cS{ }W[eséw?h:_iѕVϬe0;]>h?76+{Ϸ[vO@-6j/;5f38_/yn`Tt.+u*}_ӳ $S2+xg Z2T7Z6Ϋ4` /7,`͵u2Sqigh0NYsyt9[ʚƗXz1oޗ2 ;|ZhJ>n? >x4~^7>_FՏb 9Z:o.M,isKN%@/j~aS~˔ Wy_|zK< lU.٪+uu,Fyged:*_? 
qx~:ynxuY>.z5n7t|x&ٺCq}ͻrA9@-~ !z_J<?l'Yd q0e;!b?=ž0_!_~l͆ܮdP_v{_5K3@6<%[eֻ%lJ }}33f=}Hv宗/<_|_b*=YTaRE*+ӉA.8pp&XVɪkhѽ{dtnO_>PX\i2@˜tR9NHMƹ( ܃¥N3=L[J(Z8^1J.j5:B <0;~=Ḙw|9/J{Ǖ8 }m=xg|r}16ti4"xkpB.`g4~xs}Fooz= y'^ׇ^Xg=5y5g']{4ȭЦ]uڞ5MOj1TM1u5O={l nMբ>>mo?b\9&wH7e~LF{.2 >l+k⡦,2oƘTlo=؁%Mlzz{O'ҨlNbPA2V⏯7ﴈsw9œpz<խ-,C賅MswFjn-/^ >G/#ȍbmx\MJ&'5ڨb*eRP׊9W[UKrV,;h('Gڼdv&yzѪ٭E#yGzJ<6CHל*%Ъ uJNEsI.БDh'DaIq6pU1Ĥs0*BRiᴌ1GY[ZԮWPjͅ:$Cɬ:UV5 x"F k,Ga?~3Vk^ɐzBj}4A.\L/8?ܻև>Li7Nv v*wC[va nA[p?9wj}=EN.Ƥ\wGl}J+UTŠxNwql铝Xx*U*qʮ$&j46evII-4fk\bɥ8~˙ٙie;ٷj{WXϗ%-Y==8QgRaYL:92_b*D!@xkrۙ"eTK:SƢ(W NQAg I8| Ecr=$"uܑ~;C׽qrgL=/[7YL䬞=~߽yLH5HҒ_e!`Ҥ4Y@[ёCoTFqS>:[gj? ^y3>$`0`(uJ &LUw.u] m@%$Ix$48 yk}h*t+$H*vk)_qR-XqEIXVB}00i_Dio(0FEo`7ˇ}Si_vt24xe=7#V8a4ʞ;)3_FTtvww?ӨD*bjӋI5LȍL0޲mYpW-p5r%1pl2 Q8:;f\:ⴂZZӇj"j|rLW()s~&GO~<4{N<:^dY`qN7Y"hI"sC7}5xqw1ͣLggOz&ڽ_rC>쟾'&-y~a+R(/ 3Yi+Щt5.V|_e(矫i`cz'"Cڸ~J-Z}w>vV㯫2 &YەLx|lr@ [7Tmu)H["D .)](1k% %f Qz%̜|' x2U}ֿ^v q䊘(CdDW|r9+FMw[]L!w\ѩtVe04{ !ՁڒB !A(@̾{gZ"N.9-TDimj]$,8.IK$S>֢6aɠ)rР V. h3sYWق^WwS]#Bq_4 4]4:RzAbD|cՇ'Xwg'a7!y猡ЀNQe)r3:Ǝ+Y%s%?ˎlk"k }D;0" і=̅ 30MBR=odN*%D IǠ14̅|s!N w;g]ݐM" ؉_tя/_҃C%%co}2ĮO:kmRlY<4r5"쌓s!H9:r=AiÙ0⺧g4v[pb0{# 4N`It&UNPJ@ *mH ^{LL|4ARď3 `BXII;RtVE0)BMCqkUY=݋n_muDt$KN i iER%7:ZQj*VBHڂ"1gHx~kFGd .eeg n.}43gad DՀ19S%w M yH)#ާM3L6WZ(oX"ilb9 $쯠|›*J] wntòrgF7U;LJNM1ԉ  LRVTj 6&-Dc< >v@>6T^tΊRW:2f0h|@η_Իj*&PDWKt8.](’trXzHlu"mҽEolHo"qLqtoaYFD % (ş(*RQF+9}K_ޭ]N7_/QTG|ѧ|::jY@n4m~= wov^O+T>ݯ~@%UZz~)dҸb˒-6{o+,Oxeg;0'bZA0 ZM9>;-l!S֚S{Ԁb*HLf%b1NG& !wЃZ|g{XUfeDH"))*![<&+A^ⓒBM']RimtD6C(Fb3ƒ$ @ENkf恫(tHnS+oi |EDiB،(md@ 穨g|k}2i]\`{Ggnt:߷|_~qIv1%FagݜY''F) Zf\6k;b7`>%cb(^0.؜ <Ҽv@''O'Ǐ_kQtvBH-UVsLR!ED AÏ{XbTZUΓ3љUH|L&Rkٌ֓q|ZǮ-P{`Z|@SHpMH lͥ:pI% LBu)#dfpM:{ƚY9&Qc*6 :{l-Q̜xbB]Q5FD5  ktL`T@"ED>$AY&J)/p`m1yñqFP:yJř".;u"9k}Hk43g3"~:#ⴛ;!u6k%nzEm*$U9E,,)g(H)4>p`[]4–Ya+6.CGn |YOmN&Y[),l_ُw >#eH t;-tbȺӨ9Dwi CKYq0.ʺJ`SVH#5:EpYN";@![a`IKK6Z0"{[.<5> "5T вdLNJbD`J.{ Vo<2=W( ~}eo?K_ &_0Ѝ >E}Lus?(hܗOV粇g*hV\p"fur8 )}G:eX$]ީVS{n:co:<šE YuXR{TϛJfn6'OrtzRG3Oz.Đ9+1pTOTdۼLl"nQ'([6ky 7ONVf 6ҞY8EK}³u0Js> FppzV+~]sdYu~a=V0$5V of]P^†8W=x4`vW2rwFVmmR$MѩKE_Nc6.3cڣٸBP2k]`T +Ͼ9|_/^pɳo)8_A̢lAvkl-|R~jPUPߢjX8|j;h7]6z?UVVف_Nq6Qv0YZlxՌ +\Hף+Nh7f~^D먴+硢Gn /%* TmJ>;GW-Fk$e_#q/lTJ'eၛ`sV2",t։ EFzhgÂJ{há9``U ILA;" 6`y%/ +`xato:8hK d1l1i=t/; W^Jxլg"&&#\&SIhm{xDh&cJJ* O(gvBkS@GWCW k ]!\R+Dk[OWr+E0D1!$pٮ r ]!ZENWRwg"]i<[6#5SBAk]ZJu Q d]!`%+em+@R*yJvl]\EH*tho;]!JA::Dvݰ^}K6}g.|+t?{qڴQ\˙ȈK'<?ٯ=&Yhޡ95!}bDբB޾k>`XUM(ˍ9)bqRb2{%ܼ?i]kT}ДjuѠ+lnpq3,jfS=WOx>(a $oZ+ 2\k74Bkf(Y˖9;CV" ,T RvBttutŨ1$DW2 ]!\R+DY QZgTZ]`%ӡ+k+DeGWHW_)$CW&c]!ZNNWRtC+ɔ2!BBWKoOW3 t+l*thn;]JM:!\DWXt|WT JvB]"].Li4 * ]!Zz,Wڤd]`x2tp9K$m JZM%24'&Rc2ao/^[1qPR_kYɍ,(6? 
V t4~Q筗R %"M.ev=]zͪiM<.ld摊2ؼK&Y+t4[I^z/N>V"kTpV`ږ$pVօ2gZWτ[մ/;!AzOd$02"0'#X N^d!è+F-w_3Bw% S_3eDVrz*!6!B$CW\ ]!ZmNW؎BDWXs ])[W->+g]!+T*thn;]JF!ҕaS\oKW>+B< ъ/] ]IɅ ~$»++X*the Qv+%:!e ]!ZiNW+- 4!B: 5,m}Wn1xte2%20M++U2te` JcUGWHWVMh}neHf4M 5 46Xd|vުDä+P?>?P5 c(CK9JXח-%B9ʾu424zMN+ʯzzstyHU/z7 ŹXbj$]EQga}5Nl~/5@cl\'YICn|h )@ 4ؕu T Ɵ*):JZv$*8UyZ[_GY@)0nz=D(+N3za⥪TF,?(=M%7: @3Φz;4*~YWvMvksU;Z6;xQz$J4*Wo6D ^?j ,l1*GB 6ʒ3%^8V`С,|pbH8)Cdt6B[ @Z&o7G#qٳx8n>e&nzx|~.K o>.K)X%Xe=JFq"GO8YI W25)(!yemeYg\+絶zVP7ZP<0@WD TܰFgc/'#p>cq6˪[ūC!f޲.,,{RE Plv1L}9:=JXl8]/ fAq=2}/e0u-og{7@ fYJ:TAc΀qcwqPo[\,!":\8_2i=s$ TD"%[9~!EJRDIIJ8=tF`t5>(3NԖ}J2/N$ZRo>[+ÛM.>oa/T,ˢd_+׈kٿ.?~FEب$-꬗FHXY#$G ^x\Ic򶢼5z{ >|4~~/A;ѳYb+go$Q7p5o;޸^*yy[p,y%RW&Tሐx霳pvb!~X_)%3%34t!"$YS'O3Fc+&ёȘʙHȓgef$yKerƁUR 0Jlrd<3)՚1T ]WǓhV^~~yWcl)*r_yYRUMtY\b( ff);7N2ԩ@X갚J<g񹅈Ghe.X\aGT4wE~G$ݻۣۋ:'Xk0 y0ZPA<)IW*t6D3SPUDBQ @YH /vyo ߟeK)KĦSBgFkfSC"(:ڈ,Ctnq>"p_-2r6d;BOsNϟn6$(:MA(4jmR@K8QѲ\սe^N7%i͜=U7._̶Ro|g]XWm%an6z&RQDA)G5yz&@!xz 1!H$Sx6% DxHDa,٥@ot|{wp~e%B2#sd\bAb[E\J8ๅHY&AUI!xx5hasN_&<ߢ5qCŊ~%m@eYwzEAg6/ⰾޯ >:Lj" "C)//K!JxYJe:R7H<H5 ZIư *"Y_8@i#7~3]Ն;( KJ|̇ $AiYșdh h)"TvARMJD HǠ6.8yIeRaӱ06x{ƽREP^,#ʀ2*'kd^QugvbX+ 3 )4* H;Oz!r -5>@ Wg-R[Y i`v(^vUNsukl\MM; -^iDB L.Y$-Yyg"p[QNKv h2= i4=ym' m%DNk&ƹ7yt"i.H 2,O[̔8yܟFng/^}$B$aTϬQ*4S$ÑHB)N[$yuZ4]G!mB v`{^jÈg ݮoe>`gFF29p/*WJ]GCr* FΉ/fq<:37M;̟=nc_}-35+y3 ar5x%bc9WT{7߳i5e6AQY\;/9}߭R`#=bِw7Ć5Z(Y%UNJއ*kOzsIwz!0TB !( 5H, qX^L83A-I3foa+v[2%j&>^JD8{ (,򟮁r|+{1FJY͛cy|VU.}tD!>Vy?U'p!rìTUgTY`UMޫHz߸_ȟC-HgS`sE{”xaP!ouWկwB.Fv[=e|ٟj&KZr]|[rr]!E9׿92`nT>n0i )t?Ն)1tm5}lԇfgc{ QJدen)hϣ?wxҔ]53MU֯uoX&w knl|\=8Jq۪}n>:ԑ8<_g~jcKU+)s $u,8"KNJUӲ{ħe*׃|^"RYc2QE]}C)IetitߕL^iw_{-iwNӜSnx-+5N::f {eA$R&2#P%246nj]O7ϴ,$/a"+F"pd Bmw B켾I<8[wDGUL!!m DG2RsKTBktHEr'ͥU~$rC1iyh(g%DjiC<-Sv!?" M;N8G@4@'(jL%SEN[_cR=$8VT/|Ju 9zc IUzm??-|ptt8=P ThMDf8w(X`'J5_Ŝ ;Ui4Kͳh8wtyf?pUF6:ȦVJMGRqaoUDQeqHho4wە*U8&5? _|OOO~:y e_?|=8FkAGj~GӶy8lo,ق;KEmʙt[)ǽ_.`ZgLpes= 0J?c=_Wqc{E-ZyLay\ؑflP-$ᡪDp/dc"MaVĽ|$NoF[C@sf&  Y"K^ij~n=mղďn,Ū֑ 9d:}4[TyMтFOs{mus̨mt66!//Wϟhp8rg#8*AL4K#PGhMdJk5)R3h\sS˙Q9;:Z_tw!d΃N'bU[|w+TԵ/.Wͣ{g ?j^Ũqq*3B2%eޥ IMg6P.(q'WWBjm__J l)azwbON^mwп/OV|fo& <9(c,rmHo1f%?VT!`kA8jƈbHR#\Eed6 "DG׊B}7 ()N#SNq飶ě"32xӍ̙Ru#cmGdڰ6 g5:p nom0L 2zpN|tz_:b#"8M!$iDhHtC%&UJ{5J^Ħt0){XP` yI5EJGt`ZDbHUiD]GlXP6qͨn$U KDz X̜GSAa}+1:fVHx8!Xp7VBg 4d 1- k|@X1k(XG k#g=J8.\cAcWD$5#"iAĕ*zňV#8f΂H$VaE,%vtybED0J@7Q=)DXҠM4Ij$fXW3omG/7.d묍KvŚ{w KEڡ!H\prl$ Y%ADp (` CT;vCV3&C#7 u'r(7pkQ"ss+rom_ƤR#ud,W)?ͭ4.\5np{-D:6g{\I=S0D(/M6Bh9?/r}k\~}\Lxi;70.!F,B0#BXYL[SFDD b &T 1饲Nӡ.+!#+ᲩH9JG%w >1RߘE/f'kQc e & ¼&BP5;LjnDh o3Nj9ξk5&JmTgRapw׷]b=YGsQs-9˝R 3Ql͎l->ߵZUx$^CGNAC!fH ^ wE%Kn-,3AY1}l~ 4۰K@!KqtMf(/>,4 rsهeS_Oޗ[t}QVh tj6CO 1+W6:'IJD*`:^'ũwz4:EǸί'q]Lc`E09sH[ "Oǎj)ctUkNA2Ah.HN3f4j|5#,RFXp4FSZq߀ĕ%9Is97Y״>{,hf vF{QOb`=ċOκ@H,0P|9Vq(72ˌsS7$O䤩wIʔHI5D PXGii* kd֚4tܭ Ɇ~$%P A7X> KcDw׆<~^dnv>,Eߍwۿ ӎ%|xw7>\q.P7\#r;ois stE~,?@B"\fay(MYl6OH> )"r?J9! I1f4@V6Y.}$'W*$_WIF{LEI#YGbMXr"0Y*_f\ 6f9FyZo3IjV]5)+I+z ISpo ]LZP:eS*znI5vhBJn^}TLt`gJ~&i7b-Qk*A'0DzER k?.iB|SzIUt|ܾ[B>5P*#fi U̫JN&AIez}f |NĎTr^_^UE^zOޙ0P/i&L=Xԟ~MAG\=Ԏ7QcZ2(PZlp:+9r&W*껑) F$xL >xhΛ˖4xj'XUwӣ@Jlm~գӯKEVU<70۬: n1VͺOY%| UvRK tS,n41wvʧ=vzE*Jr}*pU TWj, L8J䒓D-WJ.zp֜\1Ff'W\"N ;\T6 Wȩ2]i'Ė$U|z޻N~ÓY֫Mn2\b;GgP>,cw7Oﺟ{~-w?!Z^Xوi[30zKJX!' 㝌9Zz=fI/i"#3g ]&8~ o7Mp&8~ }r }[0"EjWa|(Ef_FiL 9<׀uʁNhsPXw(;tZ|\weYלxjjS*}UDAWYR l662$9J" G'wUHXG"\JlB Oc*5(K)µ͂Tםֶ6r6&hgVIV%zAc"DI"/Q'Ư7+ҫ&i aәTI*i\;&ȝ\ S|0x,CG†1Y \󀰐꠼kl Ãuv STI&HwD@4 @v2JaQiƝVc +"XAPVd]GAܶd(` թE 2:f-M?  
qLP}FhG28B*DUGA2-07,n 5T[& H5= 5FS5FR %F(yc Ք9Gf!ջv 81:K7GBr4=E1b KUF)E k] pa)*<)i_;_[ُaZR~NKc3>|j9z7 gE-6`M@tiy.&)$øFU-1g|k C?Δ@R(Q}%ZۻG9Є:ř۠ 47Q>AL>qG_MŨ/A^fsΪɴ&ߪ7oуYl1BVN%1k;j\q$ Xo2}ZSK[b|qK˚!˛|63(`Ɏ@6۷q'K4"x}9~yyX g?W*JUen:s`ǿ]}7W㻫7D]]Շwo` e RMo{\d)H@߬DnԨjO(՗/'F`ImRddCpi%a ϷYXggB+eH!îdơCMN\d h}Ro9ңDIo* GTQ_zr T9v0G+(&%1H &%K#ArSS)FAe9wd!d84 9ԕJIn S^-xgy⑖"Q.b@e]\\rk ?+(*hoPbDLhNBゝpn=!Lhe{BQdTF dE)C`H,~I8D%XLXb3@,\J`a߀µ-R{q.&oTn2}FSGE1L,ar!f` """+l@ƫp`4#%6(oa^(D)0D`,"ZCI>׆qS​f@ dzɢO 2ФrP#H(%01p/xXlv0pw_"~jȽl~=;7D?>Q@VϨ\ ,󩝛KJrIYJ-rI`$);t+R*p.T vJ,ns*,2kXq]FC ;FP<.z8mXb#&# Axɨ`:D\)wقOz%AfQΙ_4BKz:vF1ߌFAnp?{gӏ3~Wm/GIasiT.'Ԋ >VѵB[|" P!GTltouH/D!mu>J(reHb+/E%yZ/SmXϗ]\T>DZ@Ƚ "D锨#,Jrk) <6W$3~VnXZ}3Pk$s!s\a!6~zh׆GtVa>3wCnPqt,#ш1^H/pdkP))fWjd6:?}ayOyRˋhv .} %OU%bquqJno=.nw-Vn:w٬uAu/lm-+ln]w9wtl oͦClYaEhn[=_7yG+-χl{gu{M=wt(Dĭ:J"p8)pQTN͛:g1uBJ(M\: d_kɔ) 2h鱗0 r>'r@eEBu`K樊Jj -ヲCYmVZ,zPGQ~')9NqM#kGW4k 1HIDDѪV:It+B.O&xg7B0@ ҷB/͐q3csTcvanu5Dџ$W(Sx9si| CC9*y!Qai=ZB?&c8/ 9x>|6>~g6dzE#l\8H".Ѡ\Nm\(Q8խh#rq= Q!n  0$&1N4h)l@H|ާ NZj @ Wo-RVđ6p7gN6!ʏF&g{A6d؁-v,Q>'}-H% 5"$g:ǽBBŵx+N:ZEZxl~q=4B`0ѣ(ךI$;qFLLm"d`,cp<1r<{It $ ~f-RNFT~jd8= 1PN{y7J(:ؠ!]B݋~" Y/]].K7 6ʄp$ .WJMGC_ g.FΉh/ `0~ȁb}lY .jKa=r {>9z՝{$Ǩ,r}}{l~MC~cG #&.Qdmq3ԼFk٤U5w5p%ߏ54Φs蟮蛟gaT&j\yZ_γ8ϦmrI3ȅۧ0omɬZ,U+~Mv QX&&*XmW^N351`9Gx)K;6'^+]?7~)-Atah4iʤv&ɨ}aJ? MS6^|bwYѸUbv~ݤaro W5{| ^,ge}@__55_ [CoԴ k5S~`OQ.Q?6Ɩn.8!RqBNll\n ;ގj)}4wfOJ=rnۺkxdhD{$]vm~Wֺ&]lFO=.)RY1%P/;پ\vdwj\1&t dd45xɓ3Cߛ\?A(t7Sm ޖuϢˋ*EV`~';3CT4D-\D$D{8Mڸ Ɏ]3i8 80y ztA^{z oJM[TSѝC ILkټ@el7^9sw~w~p.i6 6??P ?^_UaZr\iU$%RzHǿfu)Yd<0QWoۃ^wf,QG !^KC'7mOvzkUqja!ڷOkƨmw jquԖ\\Χt6~v/V[,dq̛ƶRrSp>B\/ ?zI:8a2:swqNiCK"L!+ր[>$A#v R>h .KJHơ>%{uokOpc5BE T;Ey\o`H{5 %*4ZGO"c* "Ow-+wVEKQp&Z 8|>`&EGfL*q585N8m9 ץ dB?L>76p4(V~Wx-b;0y$ ;x[su#bBa`zo,WT }Ց˩=_^2 8"lM6=>&P&=`c&;TFb8VjGSc,R>E>v\V?3+ݾ!8-A㔖?z}wgXڻv:Q+i,iT`80!ӸQze冽{o}f=o4o=_{-s!_bcٜZ[ flrvk0Z.]C>[ͥyV=f$7e:߷z.J9_*ׅq)>7B樱^۪dӛ$X߽~6vL bٷ g gύQFKgݹ}]ou[79`WZ/ZgSZuS0y\K,NjKoKub6~MY}n.݀|-~#{c1R=Q'rBtR!!c6](IdSv{6p|5Bڦn[׏bJ;K,/׏}1u2q^HAWGd!>$Aߗ@a5L H?pjcׇ]}/ (xRr[쌓)_SઍSbS5SܝwE!yoObjtJ {x-mT0Lw9 sP_yܭAJ w2 }&إ;DW,vwj%+t5Ъl;]++قzl ] ĻBWNW5Q ;DW|AmWw&Ѷ@{ztbl+vv\3t5Jv(++/!`  ]jz{ztDLإ;liwˮ@[ Qt J$ 풺g+@~j nOWT`p뭌'В5NW{ztK炶 |yK-7ލ, >Aq2<ϳU }4Dڟ),ţL *^'NM(8987vH7usk9Q4o"ߕqtC7F5<x]D~/CWC[UcFg ] qw p BW-mRxOWI;DW,}xApygjvJ5fOW~IW^ðMdž~8<#ݧּwF(s.l㮏\yWMG5A_.J[[x*{ 8N'/D0vyԸr Ӆc>pϢϩ|8sWVrTugoO.fEs* 'j~ĴvqV{#|Bb2G$ǖ ֺDfRMVmɇf ߃&MxKuՁ{b_?4O{o Vߧo6ᨶ?r '[i,&UQ C6Ol}tmeꂻ\,j#Sc)T bKʹRu.T{X;sm# [sd?4pPjeiTXsm"0awɥnl=Z0Ę7ZÒh,ԩѬ6VJkbDܬ0jnWzNR9Hva-V)Fr#H9[R kpu7@2o7˩cO´j-g2v炅5Ccv_BP.rb^rJ&H0^{;,˜9{:3d[M1;clV0]HF5C*ølK1 TsiXhxT{W%vI,c,30 l B<n@#>ȏ$v0vuFESxH{ V@13X2̱FBYvxUbZ\M%.~BaJJm:E5=\6'glQOp׬QCvM8i$Mqu~0)aAz1imDjD$Xr!@}ɰX8f"BmG.*5ʳ`O6XZ4.*M`ٙnB3# NRcjJld "Rk ]BqcCq֞E ޥ6DmG܎`֙$BH4V=`%dgbꠢb |  T9(tԡyšq4$BrܪʃCS)Hc!Ly&[:C8ӆouC1<P\Z74dcX*z|暋r LM #+ApUfg/Jl@4lF[PvPkj QQN=Rʒ`a}%C `Xyˆ f4۳ll`~#XK +dW$o;!ʈ͐jPoȻ\?<1$u6z"1K]`f46]  \̪q)q8dR0 35HBhzlL /%dUܛ썁D]f")/:<+Ex@Sm ()g$ȯER'Uc܋.RVjHxa*_NNt-v<_rsU)狧^Vy]#X DŽ1*4FGe&]ɤ9 $]IvMB+ud2 :"р`d$_(Xv#=h5'HiD.H2*#/C Ě.΃e-c!y&Ȓӗ]dUʘx 4tXdujP4E]U7` d-0۩y.B%vCyjSLX .+tmP0`=JϏ@^,?{Ƒe aIK~f,&$_&6UaԐm"}(&eQ-vtDSUO[UIrC T§w!}؄`jpjy $$U&YnAG @ܦnt  "3PH6@kHhmǚ4NZX< (tg@ @$B2БqqAsF!tJ36R$AKd@)Ceq@!#2Ϊd%CʰيZ }lHYcD+0 ҉6B .w4CzЖEs4VԀ,$ekڀJMΦ޲(&" 2Mw $a:J` آ}AWAK%L0A C6lihh[f7X~{yۜ:-Uпiٴ9QfR[zgR`0u-mҵ`3 ^ JI ʮEj,Y:jm NS!Jզy-1dFCkĘr6ݽa#[TfĶV1@s8pS"Xk!9ul2FsC Fqs$ޔR],nw0RAA ,$hT Fi SRnEwQaKl+dO W'M.1 O. 
I20J-kD-~YY6s'?ooGvtb>!7]|9Nk q_Rlm߷K{CI Zc60{Β))sڜ3)6潽s5*蛙lz Lf3c웹"(퍏 toY$| cmt,|-|Qm RrG $aĂ1̀Bm2i .y=?qE~{Mpxij:FCc;ٳMuM>×͐.+]t,UcHiBu#RXzA#]T"s+\'IHwRsJK,L1}TEP ݟu Eg䤎Fi N n Ʊ.XcDFSc(""FhHPn|4P1gVb*kߟ?SBÅn׳iYTw(D{bƆh%81ugW3uƔdT fJg q~`)w)XQr WFRsGPIӠ;C8pJ . ځO>&|[(Pkg*n8_CVDzEwM$;Ԟ&tH3c$'bȫNU?SJ}ŽUE? BFTĝ(T6jM|y<:q;&5rHu X; "GU02ؤrt.`v*|>h!TjA6KHŋ6ІO1>: $߲L>Nvї͔t5^:e7^U r"ٴG"wqҐ?Y'aܹ zS{Gr=g +qb7W]w5۵.z)}YMY\7WMjoW%9hy*sZh 9(XKDa$:RpmVs%e= VV_wt@0G%NϴI}J; P+/`HIs:A'}N( N!T:BTI1mR 4%>VK2i6H36A׭7!XY}7$Cؖ1e)J3MӠG?|V(C|/ՖU4Z~RNpUJR:JV"!MAlrb A,QI@h'$8;ɒ@U:1%:n C6Hp9D"5hՄF"#c R9 7m Ay-$i *DF!l)HlϨ@LDWT#VZd(cK=*j#8V吃$x,-R]._|wH_ .hz/_4"("꥽-RLauOZ|[=8&ǩ49Nq*M7NcedG DY: EʶNGQQ4Dt۩a(@qKI&Aъr8Dg<R69ȰAM2lA%m.!H:C,'@^!.h":rxkaYX& z*n&79>M7>H@Hb2 >뵹*4E\7 B%N S"A}FWᦥ5[~-;n]C64Gd]Dz18<֠)$ůaBrp֩5c йҵKyF^3򚜑䌼WHe[ gdh5{ #w JI#TI&FkTyGTtx*Q&YbiJ$;DD UEeㄞ3x@hp-sAhx/(I\h 3YUaWqݴ`y3~0{f7Jm|Y#=Q7_⸭SIѪ}d7};pn%VТP5P5*BE Hqd,?-D@:AM*@)e3܂Yԉ'K%&kR(kKU@i(*po[92 !Š _ߟpҩLgp'ŏ9)2"a{8 ! JIȲFMIK7;UEEeJBRv(WjD:TVua ٨ )D{lPd/+|]YwZfrt2~xxfb=ɍmG7ɇ6OzrޥMo.V?\׎CQekj/۽jC(#r^7Ýu147w Y/xa98۟R$HYמϏn>^0s80sӸ]wmWł{oW+'Qu*At\_I 95KFe5 k,\?>Y0?\Q?ͫOi%?Uc{ʷ_4P;k]Wp'_n&ӗ J3g9qȎٝd*DkjVQо-u癏>cF=iY}hJ|F'?8VT'L ܥj1(!1p#&%j"ݪZ݆?8OS u\'gG/B E 裕zݭ/b9"ObkL)Xif"G%}t5,r r <ޯ5  Uk#JFYF5Yf}CB p Z}3Oc`;*0 ^쨂gE?p4r|p0H Fr!WMDF@RZb pMpU @F(@tx퇜`ZعଢcsIJ"lQkأq,"!%IxsbxIVCQ.F\d+W3Auyxs Lt2tEI"*F7* I1@3 |QwFY1nA:œI*q <ӺF9h(#CLK8Q(>9P:ϩ .%rj@fl}۵4؞j (qnmRa!㈬AXFȈW14;6^\ rW`u 0gP m6|0 GsϪ"~u;)j(wiJ"~>82֍\\GH qK*T P"flU*6 Y-gZm ^Fum)K S锵弇!\l {)CH#;Bl*nEᖫHԅ#clN-s($jH#3'"3)v>uFTu_ !1({F<9^]se0acQ-].5w3.zHC:Ekr8Nt3x%,QHϾA=i_t,yRhvm\xN`YԌulH'n6ng{e씑_. ́yd}mps&WbO»bh0S ec|yYI{Mdܬ Qye.gM~wIߓkfSm G]gmBy6/Y˙>'fϑݷ ǁd-m⛒^2֞heo>OmuNq,-Wv:c>* ؓT!U $jKV'P[g[p2Rxb/GM=VA*q.WeVlPދbD9>䢞h[8"=lF8R+f+ȱ]./G܉g_ ڌ=y.s-%:r{caѱ[˚ovCC o!/ mvek!N'wR$m0O(]y@Sq]?v}yhn {_έK͎1сy%/EdNI"NNIwg.|vy-КbwܷcS\w*<~/Uw"}Y:{m!{=3@„8K{,sNEOD[~ 16oaQb Kr"{C,[hy^0?r-t轅ea{-,j|[Lz9xm:;Ggۧ1T>;j4V;:k}}ޕFr$+ 04-v a !^nؐ#duts< a$2⋌ȌV!z+[^ =پ *xu-Ft5zr_}Atzu?#=I2GT^@uK꣼r[KƲ}..0{ܧCESf9fQ=lze{2Z4LatIB+.ˆe)\ol.S蠷|7NY&3=|VVom[ > 7P !m=*F>i0hw&2p0*K\^&9%%wccQYҨ>V4|Tp58s& 9 %QYbYrp,adx`#IuG1W'wqۗqGLռ/˃ѓ;/(j1a֣' n_q#-L=ڑ̙nT[*N6s \7^)n3xNFכiRY` ԡ{ƙNh$-xǔODRZ[n)T:E[ĕi}_'N-֦~0^K#Qu2Tq Fo@yQÓRZ#ْcHDod @#k=c#N%cҔ16njY+;*N{dCGo*|j~;MÖd}VWB X|:w=Jz "Q%'@EJ#C)4`TA N{) 4>xšṞ=hO{9,"*]L~w ,tO> Uƪqmۯ,"lsľx!1}h#h(-C8tP::r@4 $sV+8yz\˶/u'o&.?ŇEu[]f=HCWB*7֩кnAÞ%I .wV8rSEAd1tR(E C 1\ɐ7%o"uB>^)CwV`ؗ$ʱSF4do\˝$zI0dT֛É\(k@[Se8W*kr!h{SeU!Dv,vUnLLrCFP omh=9'pՌVcu44fb#&sEtn͘3w&ei*aH8eᇳaq{t״\Z6G>] A9}(WOI'k"~;yί,Rj[FSܞ)~7]2J闿D< DچK6Z~WYb'\G%.!r?gqM^?H@ؐ[fІ񨛳e虩*Nz ڢ e?m\q ؋;ˡ ( Y-aՏlhPn{pee$ h 8J_r *YKPɥfC**Ô0I+%"Yum 4[@%J]H AMk#.߯\!P_T^ej2qM|{Z;eWu8:~%rե?;%Vgd@7K8a*g"@)EM̫F>s mR`r@))gރhXDh&%s-gTm9#aMa"g9CU#|GY :K\r^d曄$aL16ӶWz'sxJx6 u}#+LQp!Y\>Qg .J`QOg躍r5QῙn}Fr\[phb ݍVVrYYuVG:}bɗPO1‰0ǻ6u/*iߣvƀ6F}8~Q)y, lXC椪wg??Es\<\/ߝ>̑wA׳QrΨ:'%py[-Mx0#9FyotqI'-GO'~ia8fd5d}PJdovB뮁Y15O%|FQ#kxϷ- -h%jys$ʿ~Tc,3e;Q :1Lj+ בlFhfRֹU?ߪS$$k?zr*+O%mT)j,'hQhM!B Z-j QЧO-`"Q1:b(pCQ@: $/eco&aeNTM(8X|Hb%gYm›~2nym c9OVa1HuЫBKA? 
XEI "&?zlt4}^Vn`nN~LnAŘL^:+ IOQQ`]=?W=?ˮ=?\-cc#z9ǿ$qzykcm|1[ 'k#ĥCy/sh-GaԣZfM &'ЭI>cX0LIj [,[d rYwW[]ѵ  $zXԎl+]էXTև Kض/(7#{l%O*RSpN 9bCv JȮE#|I>[{n h5k#(mns|r9 *Q_G5T᎗6&_`Dr8$z~AkA21Fԁ}|z5rnӹùL%|ser.%%jv.`k!{#˔.ڎP@¶I`9GS:ɴ z U~5Zݯd푍q剁VJ6;O+gJ Ok>ʓb!OI2#+)p527)>ݯ%zϋh53Fi4mQ\țQt5Ve?рe]7 &ޡ'ߤ,LJ&u^n5t>zJޒOP@=eR*zX\S͹*Wr@C!Z5ʵI!P[8Ryg@oH*HEXa(-rlݤ {&eJצ 6L6;1tR|q0 ztzwa* ^u`(j1*dF\LN6JLVoᅲq|7N:2j7GM8_'qޅ݇hm]Sa: lMawOŧ}'Rʞ9U>uA许TF|*;owt;XP V" ZvAyÆ#bj{Hk\0Dx#9J0>s#q9TFvH{?ɗ<{ 3 WXk#<Cµb*ra?R&)9opN5yp;!=Wx&׃m75> rC13f~@ ?_&G&P,S~q䒇v?{֓t?|.8igl57sr:ޒءAחϒ9.nw>E.8I6!I ҒDYJ_NaX tp/\@6I%O7^:%3L{2LyG$j͓SY]\gLc(ݬv)xF2ѱ:CrPJͅɧM^EMڏz5Mlid&2sVXbi%@S $z RJ V!4 2g: lI9[Z$I!S"OsZ|֚@ Fx1Wϊ.)OO_1xv(ӆo/-3B 99h %a1'47J8MXʜh킊se;dhYy>F)6 ]T'# myOG/Y N%;ѵDI1j=(`4Kzɑ>m6j[n6\R]֧MӦ5Z K} Mdi6%%G`Q۸tOn8MvUCUCݥ V&F,ԼǔNcQX%8P_;a2bmQTmާa2@f)=[hcttpz=u  N!򌐴>:uo#~V ّUn͑͑7n`$g͍DTJ$Rr.xn8QZ juZ,Ef9rhGAnȡx2 *#f~@)+pde.!?R NR0H%#æ4x#[YS )%¤2j,#lf)Aa/Zg$ݳ}7O֓&wRl#* IUi4#F 3FӒTd r ,K *M Sha _=7A^EAIfjR/ꩃeP+*F+:+jiPt\ʦE[jp9nprn4Bߢ땤]*zr"5EǑBKߞõDBV A0^֌2%y>jtiw7q^i2Wɭ iƁyK0O:I,.\^&Wt|,揓=p?.χ8H8*xtCaR>g<|qWwP7pc H,x>:J T w7 .zOߠi1_rjO>]xe㩶n^.)ǫl^%T`e 9"})G%"mHwn8pqƵv/Ҿ.GCWZ@ $Gp7[>ECݭ|x-'1n˗0ݹ{U36F/W}϶ YՍnd7]t XS]yxGwdƓV3$u2Β|or\ X2OILX@ %ϭ/OBV[)l&8X7ETﳦ#<-GԚ f}`&|g:iIGXzTt6Ǯ;IT:VE)J^u/!=U*69r^1#\<:لJn@ _u7 쯺L7A<0uF^&cuc0\(}m ΍vU#֯-KvIǩV S` $E Jes(e4j ("5pJ2cb> tR/ ǟ~m\~^oZZr_9K DV ,{@ cV5U\y={F,4ʜ1u{90 $/iA{이`t}XI 9jt(uL܍P?jDz W5d!5^ geIS[ʒԲ0w+֦ٽX<䊑>Y:)3^*8ex^Ⲡk^(Ȼ*/N|{د]Z Su_3chdUq3w=ܥ԰s_'lH_:IԘ>M0sEn9sb͉غ,Rۂ( 2^<$sŌJS%T!Ķ!RR.!Drvvykd˥0>/(u؞I2MQfFI{%=C$_tI`Ofu KWSg4Hz/S+UD˗/H5]'D@X%_'ʓc_]~)|RE1-,;h;@CH4\^[_tg2֝cin2I9<40Wgԇ6l{wyx(aV8jFD $̪)8@08d8g2t KrM0U ҋC v 3l4{3C+ Z.bЂG08N̛΋=t1бZOìij`B2{]U7:(R.. %tC] 7*eBKX!YBIjt:)t~&ˬLfL)(D&~mʴ.ɫv~5rȭ_UArKg%2! 9OyZi3 i BR*nDZ,O^yՆv#fZ9ף̶j|̵N7EPIC7Ņ:bXd<}myp۪, (A{<Ì3RT*f3~J<#fIDL7*eΩДd/Nf7m:)]ujw_2x;dž8LT@ㅛWlyAQ+Fl:;[9A9lybʼnZ9ʞ>`~Y"[jԡ~۝t~g_ܥЩ0Nnf%')g]IQ2_g[-gp!DBpC6ph 4'6Hq[UH)S`XJ6!xRۜ `EV̈/d*eb\5so6`>fB Fr[.xj@9-sZTq9!A V3gmpZ,2 [*^TAoe()'bk\mF.4:Z̰i%W1N`KAyG|Oq;> VoqT,O]&1ͭk S &#+拕8wьorɇI)x/Rګ n{ FhZ<ǯ8b[n`CI"l=Mpxwkx5Jq4Bi/"H-yHԨ,GI8^nj n(oxZ =@=H-oޕ$Be03[Tއ?4lOciyRbFiDb],z,}A()3*ј "-w[kgjVS Mds ,6 ]$9P #b9j["0&p.M XcY$!<+[03mxѴThwѴL ;MaZ(y&- &a0K%ByI&hALqt贏B3mEhhEkh13j-K)s`7A&6ZGS#pdM0iy!R`Н`#BY V;36F39w`x9c5]֎#HӰyv?]N.ufތ>Fm^twzN=S9A`=!`=nG.su7*%_*d{1*~RJ(V  bv"h ^'Ml/o[ b,AjA *p`>iSP{EWJ]OePG݀[UR-mWҺpV[8i tn nf(H:aGG; +<9aw#V=~Xв $'ffK"^>9Br$I Iv?A, (_/虜H5UJQպm4vD2e790[[YGNìj0C.ؗ^ʲ#I]:qzz6PAnm*BabT`ݐMigR}' *fT7~3IMX0\EUi`qlx1z|~|sӇoF;,UFGWFO>?tec8ژQ"k5HZp`{ɧ L}H¦&hdM-24KF=xN.$"dMF`郑㻰(L'Q^ wvK~l!0EENY-J uHICkܷE. Ś1ӂFH"}w)4f}L]ͿP>۪X&^ZB}J s>-_7竍RnmZQ^\^0[l ZՑZڗn m`VJʻ){^t=1¿G[s[r'Ȯ Qv\GvlCiv ^zXGnBprZ? 
v%,WG3aY VGr%CƷ -&KXnU|r\~ D]&m:) id&5Z-sَw|;g,o/A/Ѵ-wee;'ʼT9K*fmQλ$;>ZJ#;1I4(Xw4VJ\‡CeAuG4P&)44̙O}֞ .cdU}OkqφFfǜ4+EȶC%D*;|C w-O+=Xx4U#lCdOq~H:j+{o>KsrC9stH%Y8SG)8;X/XsuNkk*:gr#۵]>#r9г?'dK/D9J˜Sp&Q[Wm*IpD^@}X|^iBWtNu@yΕ7=Tf(:/1 [# t'q&E*Tjj8αjJv%MO']YqynPuZ.%x̕#S_C|g!҆ݲ僙]^O]~z.Sy,ㅩA^@R< {d[ksuQgȒ%1ϛl DT*/9ǒӞ^OTzP 4JL O2bIRhrI8͠:C?~S|WI]oՀMn3tM,z4Zt7k8Fn`F4[dBD޼ρbf6URrTE/F&|3~ݏB6P?-wKLVO|oh* bBQ+Zy%"y:ԟs4Wҥ%{r4*S.ջ/q,sYKs , "|WTJjg)54mjl$`v- Ea$6:/+d`1i :L op#c'xoRQ!H!E1,k7J䣂1bti`$f%@"rv{M|щbM4܅x:`0$1~D^` Q$)1!Lp!P7$5 '.iAp}ip'aYrC)1228Vf,y 8*"$JΙ('LqgYXNn}x@oNg>]KWgs`6S_fɯ Tn ^ k֡8pRҩzTyY?ivf8f/D=7ۇ^Rp`җJ?>y Bj0̔yȫYb-DDdgExz5f__C3 Kj?=yWL%sa%zrm拿ϞXi>Y{b,(O"0nWr?B[c֤c=5fД<_,?otFBRgI5g7to gI5]Ͼn 2oMWtL1o>IâsoW}jgSC8"߿WؿR^t1ROpI0,iL+E3Z"3[d-`4˰]XJp XTdd{Vissͣm {*F4^|^]=eR*(1eyIbHt Q㣰TKI^XWKknfEi0-1*e AT8a3884ERLH xYqO-)ʌ&?{WXkV U2tهulJh6e(.8xu57O[RGw!+w^TÃimV ͞27[?a+)hǿ3{}e=o*?/lw-nV<Ļ>nʱK]^%dT.kllj\Ó5Ղ[[њ#b'۞mB#r F,J"ӧsh'[-2tM!JӢu9TUX)^>,7CN!gi9{Yݲjտaq}LzZ OUB?>yETHITȘ0"2c4 BrWzG/#VE!u"Mu N)_Cx0(,I9u1ؓxНz^Z W7fy:W>'7a8^zog_Pζp$thͺ\Gb2F'|Jq&3ƅϔE&Ch$|qw`WZN&hLg Q쏛 _jvcD)rR*3sH$iJNc)^nZvKQRj栨aWf:xj8R;XA1qbN=G!YaHF(?#Q|w{Z_SqJǁ@kqs&hߥX -YP1@Y6Vb+޵q#"`W6EҀbo  d6/ٌ^N~xZz[-[NvQMWVZStIΪk|tR0exsUp49@۸m8ldy~b޹9?(*ҨSEzoT:J^:k҄]M̰SrN+}}?_-dѦa%丽t1Gaq^}(jo[6E&IG !q\2 ӏp%c `)xm;W[rD?xIi:r8oK4=Ttg^v QcgQ^J*YDQVG1E8f*B[?USpVsZͱfݚR9dd$,4k"@n/G>%S!z aW9>SfL.AqzKrJyzbI( б ]^jK?gcG;|)-='{(Q;K櫱E?T2T^Zs18ףR\]kK P=c5k6/Z4׺ENI[Yսj~=x`u؇6MEc( 0؈fUU:p˝6̱VLӺŕ1N]m S/SMMS!\X01mdTQ5l^kD^1E^(MTanOR\t#KX&r*lz >qf30-S95S N=z$Nn#b9քmI1Nt@#C7ZkӅ8;o9)*}61bA)˵NI %r2ň h7FYv"hKQ*5c;XT~vsuve$g m=k+)E3LbZ$r2оRF9>_3MS Z(3M8>5BogS5L׼g2%#9QU0H:kys0Q6_7-9̽!:oNxp"T/ {Qr_9ҴW$pKͳgNqDҤ ~/U1(x m6J ;Մ4PpѝuO?g>?lr2w. ҁU]绀]Qjij'/RJ mL (G*AJ?@ңleMyDRqP@Y/{0zMsQ (hR̿y(K6ACj8$"2eȳtq+a=`ᜅKuo(ahe%+qRRR"GwmPH>wf QTH6̱&_dUz,;չka݌sun-SGG46TΑ2RH:v6 TA%Lr4~ht6 CU弄&)*c'CScɗhj%;-œD&*iGɴUjpia֨3TJ)*v s|TDpQ )WRs-iסEtP٤:Ybj1|4{W I"6 Iw}YJ$IQ[ZAonDHk_Un2l@q{d^Z>{k_:o[8 +;wWL?R32Yhi&On]*CD72^7L {/S԰ްwܴN=Z:I\\r!,T>GPi$T:Q<@!}^-l9o_O:sR:ѳɫM-40]Ob6$sNW6O^j򗩭ej+_uy1z-6p(Uja!TPjDzt"(`'='Bch^㷅lbA':_R1iU%eL^IQ`*%<Ø>JU(Һ&iŭ1oM+HN~0k]=IYKspN- 4ZLQ2tF4˕h*GxI)C3w&J"|VK 9"p!NE樎B-L'*وs[ɨ "" qNw_|A\P_<ϫIzx+10iX.&$PϜ V r$V$VJ0BADzq=IZR׋:X=!x@\.wy9t_b#~=FO2BqB^bEBs>}|6no3_h߼1Ë_ ˷J~J'<[[΍}%/y[km "kv?|{_)ש7Gz/r%~l;1RL(0R:`*ڳcDлM{e ʀ ! Q*t~bއP6=eiJM e;(wt{A-( 6 *pGC6"\ӊ)cb# FfNgC4t ;rp+ W '9n{=NY X*MA*btAT ")A!Zd7 JPO#**HpHA 1ذY}!6m]ذȨSjivG tLφіa#u"$iDv9tDhvjZ8|΀&0\ۥH+ mw<Uo,pюͮM)H*jIPҀYL`,T8A# QQQJ\b+Çe_oo'oµ%ExOv~2`+Zhc:cK_e޸>|X;W+c޻ǭ.8N|:)tAM$j#:;a8!0#Y XОyQqδI"65&@*|s1Y|~m.RXm~Y1QxQMN Vaiysa+MAJE:.;a|MVrVuD4ViυG"(jXpi%gV|cR-tJ/0:\M6*F鄍7T{i8Xq+s½deܗ1́Y+HswF`$lM1||c4mN1lQRZi/X@ AS4]ZnH( hmE!pZ*pGE'++< KIЕq%58)<&08@i79u,WA)npG,!Hi 2TAp紫$$c`+n4B`̉VmbmX;S me xJ&o`NC,=0.IPhox*V9itu\,2qd$K D MT!KXʥ!\JSͭ^8m`Bp)[9DaY޼}>2U'\D`8DY6 ;b@S>H^g(zV4% hʒ@e0*@uA <%A|cwdZ!(4*? Bʏ(Je+&ФqCV0LF'y5^,/xQ%L#!Nc-HR/^*QYf.\4+Vz^iXd ђRݴu nrRq8NԻD˛y8^ep?3.%w kxx;{w{2 V'z@@vziRY)*OWsof  A,ڬ]32P9!Zy yɃ0D8m!1t⥀Z}mEq2лxL1 0 hu?myr0G23BgAY%Ǻv@+B`@gifGD #tk.*KR]d n,^AG!H%$̋H5x$DZ+PvWq2:JцTiV6E8E6 fnݼs߳[!:'t '#+0';u;kx!9%X9_7h.V}JDg#T28_5hc {<t+G֎0ǘE`kqI:zdN#$5m`8w_iNwr 7[tub9Fy;gг>! +$F;{ N;:(+C*5x]HB=A!$̫*X0kIhƒeOrI5`CCI׮e 4gI yARUkœ&!RTiM¼+f 5 c|\s=w/c\Ej8MM8Eehj6E7{v/(5FGÃ]| ݽcP4p=4/у۵qReԀYgSt9ze OQMRg0v-4duΠy2 yLH<qI+y9JWa03Ow[`m1D*Ie'b6e hti[*~Y*L fɔp/ٙI!}5'1)8k`Rp%.K `szK 5w#8zC""Mݶz, H7Ԩonaj_uem)jCټ-_.wJQFQTz RH¡XIU uwզHy^`*FK`9Ac$]Y*. [*ʊV\Pj#)qFP* ZXЕ8KBG28޸^7x]Ƃ:f˕I ٿ. 
%8 mR9d]ѷwA}cQ[Ɋ,KV%J2jc]5X7!sV5UN!P)pJ˱tqTO.hp=8d: ڣR+ʙǎp r ~P!ZSCS'`ܞ‰% C ]dJ(w 1?•n'p+#*XY0F%brˍ3rky*Yjх N"PQ.k٨J$JCa;kIi-0C8L3w!} ADTrgcS\+!e4n"$*%R"*$Ice%AVb/]$e4Yu&6N -2`ܦgXeYj\!ہE$e4A5$-.dp0L)8 GddE 4BU֙T Y,R)Nm. JRS(6 3@[ . Ft3{.g ^oK(Q Sv$bvn.&wfΫhzlm5. fQ4Xq1/cȐ8q[wYxpB{ ?A7~vuT1FXEXrFNd X BEczXnى\g.@f+%lN_Wk(%8 TvKG!\*&#]G8[d  eQծy5c0(UeQE}ZЖZ\w5R@fGרp V7F?˩3pjr/V{FGuO>|}>w+x=[%cV]iE7^P~v.Y a/M Q5jpl}KM?߬v~TŇBQ#ܭc+Jm&,gZfeeS[k6pt }Ƚ4&O(7UKaDP $1(`v]=(-%BɅBĥȫ3iIJeBFZWkM0!#j(Ɗˊj.3}5"P׼itŢEANPע*0FEBWY ԫut U/9^\G4KhP6a?т+UAc.fpz.hň#7Ѯ,xlC &G5;4{es57XdCفSt wA٦nlP_x9F>===_lEA;~JN/Mj7SE>TEHJ*$eus Zi_fX)|n/"#% R|̞B{[`1:!aSGEK[TPRuIՍ4JKN#pF5SMp/?Q#dODQDS8S%މ]̪u}&% vω*E^G]{VHY(P5[o.G"q! XW?Wndbѣe8^bTg'{Qs%HRAcGB ~蓗w< i8.&b$7 :o{~RЩHxUDoo-Fm3 { DNrF @HZV-ס';$HdJz]۬ZaX*3e*FAK(ZV:# gox!=$38()Rӟf+R5&RV ֙ EI\ǡf\&JhLؙ&L%'"&B ݾg#n5 +'#kxW*Li.nJd#/PO7Yg*쓉>r*2[vnH‰{A*y.Lf>̨7+bl hOٮ,Hdg \˄2I&eӾ-d 0fͬ4ɁoCn|# Eō_eRdEsjd/l)>/h#D R9͋3P zwoh4S&.v qf\Ŕ 0">?- !/sS?|x>UݻO9Բ9ŧ*{W}Q=̃BP׳ek.LAcrN(/NCsNĠ<,1ICd&x!$ϔoiTDgr<>No<_ ^xHaܖۓF\}lc^g-_hFJrk+bx| +BERW8ׄ O޿ڭk=\R&W$'b{"-ZU99\0䱯VO0PԃIɿ Ǟ:]NjW`K!A jKTx$dsw9,RDi*U @3dhg.[\$eý[Z$rrU|VjRUk햴O 13Lj]"GC>Eqkf7UQW!03/ZD1"hҘXf=T!r1.FLS}U$O4G: )j Gv3zE Hے!ɩʸy&JI2cXiFYےZ |~5H<&஍^^SoXe^XqoMY34 L@UV& s AsXf(96jZ>ZG(p E#89"T}[:vf{r&cDRR1r5%F1k:fZ(:ZWVݾ$RoJk&GvOT"a"ى 9wj`'K?;P {: (RO*J/<*"jܓY3ȁWW}FFSI5gwB.줹 P4RӚ6 p7nH==Vw7*y;x tb%&Љ!Ծ*A1B͎ŌjA i^2=ER% /J렵)(!*eؚaAo'39I"PiG~ۓ%;$ E-)o:eJG g0CL%Lz~K+ʗ>Bk7 _XYXrc\nMߤW_FSDq=K)Ũ(n,UtY:YyKcƈ<}) 5`~[YAZ?ୀ>$pY痰]pF)ϯ?@^AVJv!V ۆw<6mhـW8ێ 4fIimנ] \k?}Ih~2w2wûI#.瞂8ˈsɩA{_'N 8Oˢ]j`|GV3H#!*Uȁ5ӀXdR[n1X3!orȝSR炝xioqt1+lJ.0U 3PHi!>T(X'dQg;&I9jב3(8)CΘ¹G#7X9I" 9¢@0Y0)AIVB S)"֨1)}c[; t3.FOF{)An#  ߍ}8EM ]YNrpeiaaM5 _ BH@r˭1{J)"< Xa NC9rðF3y5Y~Ŏj#i_)S|*5H%A\1 DE$l) ބ8Fzkv39! Q-rxV3+-+p@` fL 7<4z, C J5;ML"{PB ޽_-&t}탰|Jf򽝅UŪ/U2ӏo=9+~9{w}txwP|2p "G8Lo>],t6_qb8O{-΋2N lP On`C%*ӿaGIDeJ}Շ>xQ:jE3SnV߿Bl0!@j5[I!QK A|n a{q]O1TO1rqLy]^ibԀ(5o"860IrR R-I5qB /6ѺF7Wc|5L-k@C Mrp0[|]>&:2,ΪbåWf#ܜS] ³j5u~>ʱ /4ٖ|^œ<ٞ,4 ӹ{*ac}\ խr]njRxs0w/5&~rO,>[VW\X]oFM:vq}(}^߳Ӿ_pW6(jN~nɹxyG˼Gwosl53#3&R;qٓ3mcgaRpIDa&r$3 ,)1\ m/4Zέ+7~?A` J5ٸxrwyM-ʭ7r]7^Z:Fn.ŨK1zfb,.I=ړJ'#7"u)*$hd\}C`4RDYzXq\ʎju͖%y`p\|X&' BQ6_Ӕ[l[i Q%4oN~u|#||r|,,Cj5@#⠺X7xv9<1λ$)RF3DI>^EM95uNbN)5~RQ݄ZT+Y[)4vvZT"#}Rh,ǭa|X3e"+Mg.x!;'xj?#QJ#LVqC G~dt^ҳen 1N鹾HJ0aPނQT[znpPP}$oJ(=HPK ܚTjx6]hR)9E&7#V$۟niѷ!9$Ah$("m;Q;rصۮ/dՌ0[JΟjK|v~P)FlmSac␬Li nS`wINSri?)gRv(_m} ]liBwf,F]g:7.k:ctq% y{IIURi633D,29rD%*KޑZ&=)(3L :(-]kY@FH#M%vA5)4ckpfE3.!fՍG7jQ\7Ymvo-O'se[*5E[$^e12vҎQ:7p9~dESV#ơœni.;:DdGUbwz*=4qç ^xQgA/ IHWz[;Lz cy[^6^' 'm2w #:’pi1Fo~G4 qHQHqzG`O֊I}y68\a^?arKx *1b>J6)T&~f7+a<@ɚ[MLP>&psY*L(Aފw> fQjt Tʾ/m@o/_ARN'OVa:bp0k2:nvfpo90Iya4p9`zBZitРѕϩvӯmǭ٨7ώ\9slneV}L%J)`>O鹇(`!?=-Mypj<o \FlwEF'[iiPo?:BQL2wvgZ۶_CӎiqL>4s;tڼ^@Kn!e'>wAB QLc` `\"&L.NpSyęe2.e^xFy~u45g\ k0ϵu @|diɰ'@;hIHcn򀌝7+>{#rHiHbo:(d@偪va5=}2HϋM+; hwrZmd8V5ZQ^vchiP2:f.dk= 3ϭs]zmGSNSa\ckl 8e&׹ZGLYuwztFFa;l"]~-1{zdmSez>e &$*0:3ʀ1YcJ5E+W2T4NQ'縷s1Yo2uo@4(*ܘƤ˹ћ۳w~Mˆ\O}'z؈q=E^$OE0_Hb13 no':aHΉImg*\Y VX1nView 3R겓YjA-;,_nZePI|cjgn[*K]3ڠV0*$ HFH/mrC_0T)$RHJ1qC@ E]2Ik 45p"~;'7=8iйk}Â^6 u CaK!u; >R8u\2ȕYPN(W #V )?OÀ`|d1ɉz/..W;nW|^Nfs$ȉr~|)FhBγʄb:Q"$`7sƒuX,O՛a#.K09uv`[3ѥt"2\i<8ay :ÄfTd+0{l?4t*6 @0/ x[j'Lnf|̶YEÍ&\#`_-aƛ}}Qab!Ae '7ji8hb> Mg:;mifxQ;}X܂p p甡& Eܸw=LmL(ΝW8"Ta`S_ b_oE!p~،0-ú|0jd>as|ĉ[zI+0":f<㾈Xs\}_zצ`dmoEM~ݩ_.9HHCKrioi%' ǧ (ijCZ y,t](N#XxB[%PƤ﹝D2R9JnO*~jk%yE4+ v4CYHB/\P̑R>)DqF0V`zN;z{((肞NHwe^t9Ħ6*|$vaSF?휼HUXNIZDuS9AKZژ|,l./UY7cVӔ?G| Zx`Iys(a%XCqłxHy9v0s kIPH\Ic K&YRn Ĝ}ȵgDͪ%sZLFḓT To+SZ>z_ة0o]8/L8Ƥv bR 1^!栙d`xVΜz-Vδ0f[ǵhJk}5nJ/wzYP8,[wl"e0L+fL%Èi \Pws 'Rb:VWWk1Ε8[ i2G線2QC}ozr@ 
lxJ^6A=glꩇaYgK y>zwq4H+>cv.t[3?z`&cvLkO:F~* Fj'c%!cZnVji%(Tõ<t>}Q7G_ܧ@\(N,Ptiw]蝣4(Gy8`ZndM-͵r}A^jFLU ωk1/O p|AS O=7n}>>柖8}n'Kq vюh ~e hQ;i .iqx<P?p'Od7n/1wgWq3wJ"IG.@?_vET<㶻Ƴ~O&.|~Nj3CF8)p HsLmP~|%_JF{IeG^~@#qG5yEǜ(MI{-IsG')b 8[Pgs+KW,m&&;O9;S` fCV3w4}2QEvC9ĸGųo=VV[TmKOVGtdZ [' /o0F$Wo?^d}k qn<~H ߦ=Loo[[N?9UQK{ F|J*do^C?*P @=,biݍ~?5:ܧFS&.Є?x>o6k!{|/)4|_?\ht#( ҆qy=|0В5;?A^}hzch􍟵YJ;Nm:iL6? KOjr{Y1(O,o@C3t;J?jt}{tyIM@][榽Oz_^`jgt1*Td47v/EWt7ѝ& oyE FaQy{p& (G ~ߛz{[hOoZCKn@g$+ND WGiXvHlmω9n{=nF"N%w&]Ԓ[>|/oWVkQO'~FE^"?~^}]CT.XO ]]5rlNO )۸? G?8Fa7zoAEĈ E.>l lDɀ*es'."d:΀+t> 7Q΍S4nou3Ad,p[{ؚ|Y扜.%8 4\0'HJI4^tDs ZdǼXHn0S0Si\vX3XcJnl?6rAׅ瀼aN̰}xA$d_0laM*C *[wdA!. 57|V[;Z0F(8̾pQ6-:Xύ*7-+ST><82}@)]}J)fo?jWzUgDN٬ܲe>#!, Ey2P8wY צnXv5x5MV_L9rw?.QkyW DZz|JBE4yAxǪ&OCNi/[mk+_G6\_=n;M+NY[ښӢ𫽼 5VHXV7{>\hn'UpyDϦd^om3-6nm^@ЮxCAۆ6N(GI2CwoB yqu\|>8( 2v'6FZ=Z*l&o.}я)˯oJ8! vcױHSBw|wk ثλSp]jwX'>’XJ*&6].A8CvS`_e/0n1PfR9ϓ*~U-,txu=Z~?=wދoP\u '߆OfOݎ ˞DaluMR(:pI%~j=sދN4,s4C!GVcdT4:AlT]"nO:s3kW07Lj_ԣ~]#FcN?\}󋷛h\͇uo~GVжv~zqZ*[^w!?s%ml^_9߲`|?̆^v#7}~ #@~v{(gjiY _r#U{~vWWIzVmm 29fjǓg?晇NO{ȇ3U;{^P}+Qx[ \htP&{Br8'KPDV,V!d6=l$֖NHY,KG!{E̻ЮЅא/ؾ I+!YTRO^?)zӈ(0Xm(*y^KzG߇o "$DcDC)E%G8#۶ o"[T2fM$Mo{ӨW[sVg#+gҘTKVdKbHKVKRԖ"Dy0BdTԲ6;^^B(P"8^Yx"ժ%j$ n.y][6#h&1`z켫M36&ɲF3vhwb!e6xjz "6L m/\MݐNj\_urj&OwGeXS*cͬ2dj\BxWVZ/>? `}~zgӇ*! +w@C#KfH;vp?>8ɇCy>0׷r^•gL^u,L7ܛ/kIL9 ʬ?Wh~MuT_3kuQhNڣtjj@{'b}Zl9_bȘR)IQ |ZeSXw)"Z^J%`ddp-:1WT$iaCft̷$ox) > EQ V1`LS\͊g<%Ar&ͣ& ${Zd 6vQVkšU⏖55 3ɋ '3lŋY hb Ȫ G$0.WS@GCհbc)Io}TGРXA%]ƪ>K\@$6y^6\1\L7{ne)/;lIKb(`e$[,ʑ@b!9Z\J\dd/gts_Ei9z?IۚVWՔioͣ_ysׇL3[q@2)|y]iˇ~,.{,'x|SqЩZDRy54,7$.7sw%Wꚱ+\ ]0[`X>kjcm?9 G3G^p27m(ec'gpg]s26 6mi^1C#~{M3Ji jki]Q(rÂ4{<Ý,%0*9lNfb12fh} ; 0'>N.֊Wh@W)4>v|7$K^ } S7VY#eu[w NBn&&CMaoϟtl9Xb”xZ6]ߏo_͙5~$Fҙ7 9Y$N tjjg.n|nO'g#/:WXMbWP,uoL ^O` 8m,.?&O"N*Zɒ;>~8kyL냦3@KaPiifSDtwC@Tk?P ` .bLASBK/EuP;oDB.~xPLnH)lxhj}(kvlKBڈB2 aDII@Z|(v T`6"`). !5ɠuV˂ "rVY|@ά{K*-)#A 3b֢s$g yHT[ gPx^q) X(x`E lS8-ry6+'< %(IB(x)1hi%1έڂ_?p2p ֘lWD <,+:]ReiRA޻Hb( ˒Q{(}JxDMVBV6]Yf8Y-Em T78&^D&t"QaA-}h QAPվmʗ A2xRгzH;BΠmLmR: Kvd4H!J^Uꨭ`IͮiPH, .M%7fh+N%g^jPY#!3c*q٠V ηI7te4O[\C+Чb_E'e5;w_bb5ݗb?RQnpN}ֱ{Nm6ŕx_[g"6;BԂ f/| ͆sxz`Snő ; knŎͧ>GrG^ߊ)N/+nɋ#_\$dz9.!ssmx Bmnw!*muA`wčמ ^OPWcJ!ū ;_JxmQ_W`^EW` 놮sfۂr42@VMKH Wp:L'5WpK'ِx)fK8,v_ ;NJ6`{%'dM 7 nvoATI{ߏX%^,ZKZKb)!W1.KRFPaq5qp$HP3 dz)HKNSrq)v} a rzN̛RZS—Y|zV\V?L\T~swu0/'v:=VDPj((죦540}trz_^g曐5?{vnv^grv0["̿Rߜ{׷^ ~wf z?32طf?\L}aFvj:[8Y/S[p>;@tvV=r5dyr>Nɚڃט35qӛ[o绕~$%`yȒ)Sh #٤ȚdZSI,0qY됒Ġ Ј kC9 -Y: ;ÌWZH&)BDV CMgJ3"ΰNaH$D 2MT˴a$rtPy Av^UYXO6|z-עOKYl b+625>N212i͔?^jİxό 'sPM9"pb3$ZR \QH,=A"voTK*QcrDbm i*Ldm a_kJQqԹw^Q%.Z{-'ܭ Ľ(~p ZC?9 . ~bCب /SA%7;墽†d~jIB{.VRV{?C ά[uFa'ݦ^aU|N`œ/;τ&I[8?'߳z ^ (WI"|ʕ;m (dbr4-¹:9s5 8nC7oKw%WMN^2uNގTGʻ/_|n?+XLfa^}Ҽw f=[f.۳@jNۈm^%qsm47-¾iD.n1w~Մ%mA),s:@ !ϡ _7Kn )%];IS6! 
R$$Ȁ.]<%f(mXh Ud'u(04d1p3&NחcN|J4HB*V2¥0|h.7j)QH.$ !t~[eBF1lo\<_pЏޕI6!@lUb"}s$z1,Ad_/apGnn~RЛgghW9.;mtiKo;:1 ZnxƱ$V$2sx feC (h֨Lܑ3z4yu.S%bG }.q5MB//2~+NJxVy;r*{yorQӔ7l9hfz}6 T1 F[u 9i>{que* _g[-sQsx'o/ڶs^;QJ.*焉ת8Jv;l j՚ry̟%jn2vx=[$5AZ$5AoMO 4CF(*3ߦD`C[%U~qe RY1fI3&;\9@KW94rhT I-CjMjj`@(ngNkKȵ.M~tTf9)=v~uhMZdĢTHH?[)"~[3BXoUS+̼IkN_?.4HˍY!/8^xI1/xX^py /S t+;~'H҅E\ JFrz,x)P*´>~p~ |x!!˜3YeF%~^|*~P# Ib޵\E0SQ=Xr-參W3kk0}񑹻v Ir H+/–pW]P2ˆ۞y%lZھ@jSdZ~%9%㽯aDi saqd!ҿHz3],CLDȷe-{:tS.,K@J($ɴd!);_1D!C {2:*o), @ F$hi GlQ֝ A7|LHar\a3P;EĀI8*#:nir J0n9DUUKH@*%jUFTI[s{} o5R jt3P XkQt%k'KLb ghƒIPἚ.x$fBH:1$IN?6rDUjM8Nt98NPR561x㬺CHշGgQ0>yς@p9\~VKֺa>[\/X"*㍯#_oD _ ?j;ٚ*e'7dWbhvnƅ ԗ6po$([A vƇCVQд9 湯lpPqћsqWGr[zU ZSyc )Pౘ=!Bj7wjC7dmƁ1H`m8.K4kt !\>2^x1+%Wq'O*)x)ݚ'TWA-ǎ3V3+84}> 7ۨjX֪((5PJD%Z)+sKt1o?7g>E!'P=Sĉj8vbb}2f.ZL%cq JZ_?vjlq_τ"\(|: ^sPŃ#_s8]^{oja{E!e2HTD&TXC{Ohm `P8&U\+gTX@N1J)fi"B0i/h`W_M %RME M&M:X#nW *zLRKjZ+~c];9SD p%(ϛ>]6ʘwn}!oIg~X tQ>̪B߸{Xuy2K6^/5rD0lx;3W.A /V:#ca]o+E:,j1e=6[x4٢Ml\)nz̅:i꜁@j S~]4Y>X['GGO_BP.xsbQl6#,H/7:ӟ kyM6NjH28y&<jwwUsg2Y~7_$LJ t+ Dn4 n}U^2uմv& !Io$^!s8,o{ f+ C`mucEFZ9zs½ovU)3oOo`H8ދ]"\UodcQkpj-20Vn L*:bANہPȩ{͞"e_Ƚ s!D+rzWIj< e #¬7MqDU"" di -烍PmTƈJ#+#IfY!45U _?o~\Sl:.ݺïM?Y% sN|R{psIN_e~|?>L|!^W⿏WFB6"-7F)&ȶT)L)CdeZ[qX 1(e&3D(Ggy,sbc^ٻ޸rWz 6YdX4`ݗ ^ڕeԚIbRwKuŒ`n *Ⱥ.Î-1v9 5+}J!B՞У:Z5:5jଡ(΋ڱvhԶ*|Jq31@*/(_Pg?/ mm]}&GK]r6kZ tgS2L3i9,]}}GRבvwXU8k=S57nНF$pGATMjjyT8,}Oۣ{wm{#?*Iqlض9rȖ[偙j[_d>*rMd"J/υ_.'gw"OM~SQ6q*V) O>baټz\h]~@Ν'< rn@s͊?/T~ t9YZC3DŜ2i=ȽuB"e; 7mF/iGv0W-~ ܇ůvT0X^w0[Co'auЭy{w׹KY:LI/cyNo֐CoɝV=8_qxwe /?t%fZN:]IIp8Oqб'ݳ]?ɥ|9dEvX@e.e6?Ƽ[L>ɯ'0{U#dPpgQv74+(Rp]:&Y,9m5B"C,&x=ָttg\$AajC|kzݡ=fo=mm>Gb "9a\12أr;r;1{ _ǂ:fs(^6q1.:K{2YpI,w員]3]n_j\_޸s=̚dgj8>:2:jOGsܛæj ]=ʗW])\nJoyNi"^?~0s߈i~<\dg4V08O?~s0/*FJ}$\Lt/Hpyz*7ؠ:0wD:t2nXa>}78.x)#y% !:֐޶;+2+|}Ot=nŖ=  T+LY\+\M?ViMËۏ(Td|)G|l7/c-Kul)9&EK&(%]`|u^UJ̀*oIE<ȥ<;Pywc+& ueG, oֳ99`hVhȦʟ{;DkQK+9壪Յh\XeA%d,:sf bgQnDqBq<[wVAb4JFdq3QMkk,o÷VM>dzyEre㽇s&U$ m6dfdip1TO?ðm%,?|-p0%d{dOx((Rۧ])j&~*]x Vd;iMĜ9ogD߅įX*$~B!e&kr|xX&pqqit+ϳmǧǓ^< zmf  Mwb,ä=O8]v}="irk3b?T0E<se\+R 87^2MrgN 0qsI՗ڃX^*9ޜ]Ɠ4\S9\3ol-jH:~%d NӉN޾׼qrkɂb{ 6zsܸHg_.P\hS5Fء*Pݙf'kIzbjpngs$;H'5TSWg6KuG(2[-2\~f{˵9_j] p xGzqB mKxvldNxPw&~w$*Di.8NCV1z=H8!]%:7UU>[Cap,BQUV NC6.n.zԬvT MO1X;ظb8#JIS`P@y[jJ&`,rN/İ5:}+ƐNX5F~bisQ1]$f669FTb@b1j1E֪1iA,UДa*KR}*(-ىb7QB&jBcJta<@>`h9G0d*|Qi+sWf0cqe׹{;_3m>7"̸|kU E%AgJ)6d hA=i~ӻ }sͤvi}SdꚌ[]qk2kZ.g`9"`aSHE['S ţ,Z߫@R\v}ٓGz'ugzڙJPw"КR>YUUVtr֋Fo=m#CڟO3Fԓw |ӂy/fY^pqz0_>Զw9hd[o.[>᩽>Gxdʫ_:Th!hӍN4hZM[X]+@6s\e$mJ G6S"ӧ_fjubϒ1Qgtbc%]MĔGVVպ<%|gw ݃sck[Ⱥ19N{ZKw-媨m4o-(ozuݻsE޵d7f /v^֤n=孼 'JKudLRȼɮ/sW904pJ֑1ŇF*4wt%Ӏkjgߜ:]}}GbZ,L>;xk8sr\t|zyp }3h&&Zj@b>kKGI!0n%!kHz^D/ ex\t})})7SJlJ .0\{H% #&Tp"'ͧ3}ybE_UB Jsiٹ쑼c&hqO}cRsPKs[6-ιBrnrΔV[/ ˣCo@q[VJιbZu=T{xٽ9F#?1VQU " ) 85wub{gӬA~C*e`qrJKrn:R#RfQb-p\RRy(jѬ'<8o+X5 IJ\(.8-tAQtZy0誑D)tbUKWܾ?QKP5Uy)(uLZeIPP=Ns#\32}t%@-^OTUZbf(hjvٳ-sn"^)K+`YUs#GQb'P\۲2hf |sWLq*e##YhTl7AXEO9'QiQX5'S1ZV0yjuMyI$KU"2S"U/0=[d)"*,F.X`DY|&+lSEV1ʳ>M{ZShBR-*awkl6W*ʪC[!Qg`|Rh'HbgtsmȭlcH L9JE)]XIa؊_VeNbVH*;?VA{$N䪒"P#jŔ9)S,d1^A Һ Ag*8Q{+Pck$WFa!/z`dUFhYEAnkn&/@1bӉ`JRV*ScFU s+,WbA{mFX%V'#xWM!UKuUN|()׌&@ކXT ّR}6[b)c!hQj3KL4p)n򭱳Ix s>TQiMc]RD+s6 p.D%7L[X2bH8-^J2(N!#B..FZc"q2y- _*JPDo?plZ5`eΎX<]3 Xf,qwD/itFl0 pJ/.BcaE8 4)Z}^ԵE6=x27i*̛_K4@/yr0jTX"@h]yݭ=P`йtە,j JS8[ q MӖ;K)w#=V"8 M9'^['.$s+s2,*Kȴ,vZ ZvSݮ{ S5NC{㑀`(}=M>nS(|x)7vkƵ4Cw雛|SxF[zG{B=Ҟ]2 %Vʀ7q(M^(yQ‡||pcM셊-%OcJJ A3umk@$E^ 'T@R.ΐ=!'BEH#1$=+)VehU5moRFMh< _YDwU՟dvMtEwym|thCSIƈ<v4_wmG^'ez:]<| ^Q!kKdsԩ¥ @yAU{TT7E)3j>UuKA\^hHkDEgHVA, Vb.yp!(>UF![_ٻem:Pif?<:30[WG9rs=dKGףOw\};ʟ% 8bs *偡3 Ǹ#* \N[;, 73{>{.>G6Dj1:;p& 7ݑ< 
=ooGm]`]H՚@8aR6lGz3plt?pUBHfN2JLBk{!£KblVrc y˵@Vd,y5ɰ *U,#KuD'Pfn<_̾&ي~Jfix`hoY:0H8 3z A'#<Y)kD48?ʣ+;NG 8¡RFX7rf ` nOsdqԢ3u͖~We`ԋOX?ޑD3,i%-a'۲N*Ka)d-MܦNaL&PQ#NǒFJR#=|X 2V6QQPl+/U,D 0 _%|BC7m 1G$<睭4'݅uu[wa?~.E!\!C :MX~/WC*jpCjcO;*1rxHa^F`?F_U_cd ;Ե#%a2ZVhxFќFsLPg=wFORba A>YI E/LLo3 SLx ;OxJbYW"Z.J_U-/4RT)BNF;o gFr!5a64*d׋_ Iɞ]sBG06T} >ՊBiUꜬ;\0 TBҊk|>Zm+J %E?2U"i/hȻ^₁"XB!<љOvynrWd~28WMLDs6El&YY<NN1XcS̻d}޹apC;-#~A%"']I6pVP|6]ªOJAJ,:(^â2;Arp +|E( g-c w)`̷oG*tQsXs<zs8 ލ0Ybd1Kůyp$x yh!yҺm:1.ƺuuYc,5~90bd8E XO<g(ԭlQp8WMHaoZ+f&Vn(K1dș.H WSseD ]5ٳYDptF[]=DA;3={Wv @=dLw;uՎ8GP3`4Ҝ[Z"Kv/h0/Ud{a2]1ahQ|ؿpH$$g ?)yP}54P]Yt; Gr _#D8绒 \P >;E/%v:}wüKLN?յ8q7u <dlOi:_$0w/"q#@ta_~2_QXcА35hmCĖw_ܬ0(w"+9jpѼcvVMg`jxQj6Ѹ #QtZuW3tnfżM|70BbnRƴגsͲ@>wxU8ʸ\Wg;+ѳ/cɤWsέD!qN|*S kGdD"iIZj0{Ie^jDu_]˜6pvtͷy &\dYh<Ԍ 3d6P9vrSk)w1e!=VLߪvf!"gh< _0nƳ?9Mწ\VmTuP۴.sj3Z9pZ3L3Z5ǖES᳌h$*Db=a",a땗PeFh,"+CJdƜЈ& "_1)[dv-Gggcc4Jav2/)(X)МRBZkG)|q)R<$vmP'eaQ Gvc8W_Jz:Lɜ%:ӌ17Kk8˔SQѭ5V-ձ-VWr߁Dj0vb4IS)&#7Pzg>caưM V T6=TTt^FT*%'$1ރ" o"CC(<&@1.X2A3z%bSr"\Є9-MIt&b4Wj)K6cX dph *0uaհ&#JŲ \ YQ.e=+XbJ a) s))XppJx&aHr'Cvw%x.| ۺGvhSC~.j> /7ˇa[Ld t3{∼![`|OgW lK~ȏgj򁠅o8Y!ftU+LWh4@D<{pW&&|!@x`HqHCPrEo%7` PZ +2@0b S4/5̻ ;!$ zT;/(<iD5iBw bfzA$i`bP-}]Qg}R֬ G=T@)hюT#YF߽{wƕoy O˳Nʾߣǧ$hA'܇,0ᓭq+Jqkw%OﮚųrU?hrw8B9Xw+zgNx_(Cql-5 8Q`&4N[& ̷u '`@T3~:Ռ%Wmek!Ռ;"l9(.ϮE U3zeR(oZ`E\~ni+77 q3D|b`YR5AxF8a."(gx̲%9PPqӅe|b9$k6; 39@]ױVXkEXdޙUB#꟣!w$<>PezϦiۓgn֐w\&i_ D ^߆jBܺ:+$h8g;0,,g;o\%N(^-E -L^PqBr%*VF'LRVۄ"?v*&xASHY*Af+aPiZ/uR~vѿؿP5%3%|FIܑzV8Ymm}[]6ѺmU f`vNS/{3DEGI^}lJ>uV? ~^ϟ$P x}RX^c,}-~~<\zyQEXm.UlWĄ ii Rv/B/~Z4ڀԘW櫻+ yqB`StSRTaXϕs_`!5w=/[+Au6{G^naܹ)RK9Nt5uj0<εzXxW ޜs2ޯ'K$@ȉO;v "AlH+.4?1nW߈)N$J$%$: |*`]VQ3yMB.tD\ޝ Ďgqh<̂ \u}e IZ" $!- +dhvx>wF2o!Gqfk /tS)Tg`.rԵBETJ`Ǽ3PWȎxST n JJ<>/DPX92HQu&/J*:g* P!đ8hXyH۠xle+-`?}aWݎj{T(!,6IS>\#!Nv6a:e^RH )zĢ̂7"%/۩u=C@5F:$M9Ț4 =| Œa鍓<ԊeDk F2nuU_>bd6~(܆]۫>:n 0{.\Sz|T!u9P5^_ RJryJ%﵇Dk|8֒o۫K-,j iuʼn{Z̦Â3{?{W۸/ܮ<28LFi"K>dnq%Y$R&$EHꧪŹfgj@+ICaps=78)D*GϣH?yVm@B;?3c^3*PBdo{'%:DE9z#K,CDIϏ]bFw3 jrA[CS@V%)_|1n}EE/30ísk5{-xO?\Ƅ7Fyh`iU|(8BIa0-/tN6ՌV6 Qj}uvXtiGvCg7N(QnHCH;\ءr#6>g/gByi<2t4ebpdVݼ/s.Ii2 s/b`y}Q]@XqEMU~yq|uרoSuő[mU$0 A`n>(b7J|~Aʫ2gmR)&jDŽ椇W+dB J,AqiAn 4%*15COCJckA@nN`ʃv\/EpGxH$`v4w U;KR NGŒJ0K8r6UR(cq<=E8He[*ЃRaNqWDn+xjIQ zT\#UݣU((RehX.B7-x0[L6 b $E'#*M*-=+뎵.$ aQ&*(%R jKa$<`J=F/dk$.wj5%c,܃-&k `i / _[3fWj V]ZzZOtq!`")ɘ & Qb42T3d*`Q4.^4G>fӥIvCc_~p̍0,)p|-9~Edf ZY2-)40m=]L9Ϊrk e$/N'`47Lk.GJ5T*c"Eb^gSݻ$t0k9{<%T) P)r-5!J)f Ta5%y04,<\spL8QL:@*Ri!U3EjXg4֚>CR]QpkʢiO[lw P!Q@wPCO~4+ESS`b|XPI'bE :8@gƒ (7~6=U9pf_쇐Kg;] Rۓ_1I{zE;VwS?_~ͳoos|DE%KYct˩JnG^"b4Rf*a\>Cp{fN#W >rD=V%uBќ zʠrivDivfh* Ge)'u ȗdRnGօµsS2FB$j0 %rr?ܙ%xIC C_>|h4im6+`'l6ko33e [Nr#$ ݣܖS^ɣdg@;i5+S5ftd oL8IĞ]BfMŜN= Xk-rkcF#I?d,o $s)y7§o; 8+ƖfQlE ӯ,hlި̛T1-w=V,_f|ӱkDq/Z_w\bL#1Rw.?kKEQDT N1qsrR6~kD`@VW-^ l_;ΚT=G*s9 ƺO,g)eGaK=6PKN,'ߔoΙO@לtO)/kMUporDW";*DZ.xhlؒ0ok%o;+JۖtݶN骶CNtsYJetmh4+02;_duv}/JWy<8T!yt4h_kZ [p?AJbwcw3f5nFϷvη:΄%$ ;zz~qb*d4U]V䨼F-lGj5XȹR+_%k]M)}5z(Hr\Ĉ61!JFg[hbRߛ;v_*8ߏ/txLgOqNR6fE=pvswW #d[*f}f_DZWyBpu|_ "YS Oi nM N_Q_\;3"le1##Eļ7HPC ʶs?uHj]}s}TRwz4!j +Qο(Jw[tSE4VB~)@UdCk)jjP$c'֗͐\`'72?{ƑB_ sXƈ7/ KDiIʉ7~1bϓCc<U]]U]]u4~' w ^cW_}3sv0K!Z^? 
**CEZcy#U V?EG῿J=߯Y= }=A=ڑuOSD_vvSNbWKjM&puM`KW=ssN)=5{ngTA|LP:R韩C `Sԕ6>{`KgWqhFYG@"μCN3|_Ƹ?++ ۮ6vAn}C&CzLcݶ잍j:np-XܜCrh,V!g5jyYgb<͸;NQTdjru\#>0n#VבhKF,q,d!z?]tW3 V&`F|<SXJ#pn:~VM&̿0n)οMxL#‘>K8Qf1ħjRmIXC~e*"61]5zf2y;$Q]G ^ yNm0n}5~ym՜ ~Zg{Sn Uy)UJ(z2j5`9:kj-YVuU[D*FhL[+fzA-9.6< CU-8; o],*c݄rcfòHUҼԼ0{ǁ( 2W,ȫ 'B=f:v1Mby\`x g%Ges?{ʩ6D Ϥ ʅRD{"96\Z`O9ϰuɴ„~mZLB.~9bH1"O-$R1E+L-qpj홎-$ݕN~ `%nK$RJm?OKD2zJpXe"LI0iMIwC5'P_SqEVω> P[ Q0z!p2!Xy%J'uQAb g:w)3&ڤ̤Lo.y` N* SRDH~hlmxb&!׷Tt |.:fi jQI3"Qh-m\FR!YqjИz: m&T3OL6NY_;5ISQN80hVmFIÎ,mQrDgҢ[F :N?~O8I3)I*F$_f/\j|c5н<AI[WBc %4v܃ey^>tРEæm˨:dX#Ma5_$ζ  7Az'tɮ-X2mb%ր1.X ¯.CSqVcm5 m5W/E}.m9'86joYUUa"БZ_}IWmW%`ҞŨc}蟗kdK%;3;7 D7=!0DJ69Nz[?¤κ5fj! -!,Y rjL^]LNs jٷa'g|`,N?5োp߾{i!R;攇]G?<՛i+n\>I3oxֱ1tcFkK5u~V`<ݒ|"DVww A5 Ftv;^px*Xڭ@c[hҒ8{|d{|AFh_lj#ڙp{pNˍNPUy!C8H9+߯ͯ4Y|>`Uat99ln=Oy ~͞0 {Fot)WnAԂfiqTם{W۰6/ɃԄp 1?~GxaP{;J2Q8enU{Z;a/5yBIn"iDZЪ3`b'B@;ksDV6u{PkmlȤ۸w5v52#R~){~2JQ:ZeJOWXd0N~HrMH0vI=E01gCX BLqUeڤ4ƀCg7Ďz.Y^jIY!$DU<V zl7i<ڃ#H zu:)'l8Q']m]ݹC!, fU'_&\HM|75L 0 < 嗒`ٴMxW2"DZH ոi:,~7 w Gu^X#4;lEXoH,Al/刭H)8/ATSLtImKŕ\Q>HjzT:Ѧ2#QЗ(@s*-h~O@q" VO8J8Yo(bEV!%X3VoA|-eo(#Xn>=Ӎii'k/vF̮> UR+Y?#T"y/3{sOO!kzx|=MD.-aYfmD)&AY.'UEylԥ(JЯŃ`&G6&΅ӓ&_$/3O /5F1`'4P?"|ņwS^,( TAӂx͝]9Vm^^,GY!tjG,8g|g?<π;/X+IXXWDr:%p4NϾ@,!>[#9)ݶ8[!s؞\a 8Zp] 9?7 "56V6mnlv}bS$Q!,š*eR_QʿIro&-ބśxS[\wK;PӰ8W]-eHf>_'V5bML)UJ"̓e%˴چM=/c uy sO&4M@r{3&b0j$Ď.GOy,bz{p1>ܴO@P}S+|WNmkE/w mn'{hm5}HI$+&p`"w|9eth@cJq⅊>#DZnř8i[oGѻ`d&Osݾz fÇ||wחezַ̿v{nn>|xz:r`<Ψ[)3Llj\i탥j ]4ާv{`iI2sȊw X ֝-)y ;uwԾxv*9Ooao v:kgPə-p,:B6{KJp#ә@XiXT8 `  p($>#1RD$Rqָ+dN[VK%$9# 7$;Rϰ9OeJ(t.}͏㛯ȿqWu3<~0V*قg Y 9h*RÏ3=o_qMLζ+r^ "s-,@M0&Mu:twxq_ws4e溔t,8)'Q&]ğMyNkߺV|z%5FӺ%0o z~9w AX@i&T2t,8 Nǂ)SY_H4z"VLK'"*OУ. `ɜ3T*;fjթ ˘륞׍=vlgeXFm,on"1pGi ;س]ۄJx?N(nh%8NlvkY]4n2Nߴ~~FЬR5 Uw qU#*d틲 i%[dsƕJU@7_X]ђ <\ B(9Nj 7^[}E B56>wIlp3OjʜB9%V'w(sg6Sm,J%qtz2ƥUWǗ'gQyu>GcɹG*q[o0ԁC([Q;Ha`"6#H`2bB%2(TD:2!̙!q+;gTx8nȦIgB?wadO'痀)y'&B|5GBsW?dK/ۖ01|IaxHarq~L3PR/1sbvT thSMW^gzo/[Ezڒ^r:rqc\DZRj8b8R0iDV1bLiDai(U(ֱ;(ќZ Ll;RC?IG-i5>'Ȥu+$C]Al >oʊ:xd8lDW{d/UwGI:',W O~I$ ; >}ho \-_[̆lեٷ[c@7Yj/S<޵g>씜 r0)Àf 9|,`Yyk`CJ4Xr 6tkx3$v1,|m1x+gQn?Cyi/B3r2eOj.jnj[qFޚaՋ#7]6Lx. DNsQ_*pa="b&sY2I܅:M)eĶaÒM䭢ho ~_@N^1FsY.cuwPVа17 ;5^F]C8x x4H)f #4,/fw)E;ɨ*Q!èIk{R|nw芃y ,R.e0g C묕  WVSFoPm.*Vn$ |r- ᢋx%K%4BIQKi-=O2OxCCj㉒\ Ҥջr/42_E.ӊcU^ط~~FFm3ۑԱfD %~!U_-f'7HDQ$@`ɩ|e b\J$7ؔqdfTWg3LA܆HJڍo'ٝj5%U  p֝ paՙחg+%UE:zD >ztFip8.fh3f+.{ ^dJoܝ3BRM<>V_O̯n3%I%`\a,g :kD^9RY]JsB 7sxh-A:Wx7pbRZ^ f8Y_O.3IgjVdXXMum;%hGdTC*UބYNL5pEK}uH8S2^(4TG[14}f' i3.C-Y@)Yth6KsȎ-ufuC + I:xiz?r|$՚aܷb_[$&U4~'x' t{L2VLXKymŎNRoK0${uҮ@eP9ҫد$mZ9W7 $Q9m[`)pDخjJ1*.z)3uR\P\#Tٍ?"(c)Q,ixL!:OUVqZՌ֯PSA$8SIr&x,ƋX夜3D cG伥mQ|jnz0GP*|R Vɉb'KAEhTo?WHIJ(J' DIIymZ_QQO*юpu8ßlT$b\{\ΘĞabHDIosKgQ*FYj Zx1NlOMYJc ^ז~ n~Onmw:^ᤏѠ"Vujj+Yw /;d)::dk 8K5f5E T?隫8 '?` ircr:C9^BPhc,auiJGyO}]r&oB7.UJ5NWLWͻ 5ſ4UDX(_gL u ^W쓆 3BS J^UdkɄ+eA^x"/rko>(6zlY~E>>?dҬD@thJs)d{Ǵў T}(R-nzyHeB{)t G`3f'>ZvnhZ np0+@G֐<R(X(fi0+@gS볟r$@LfU,gI? 
L}O}F3XQ:EѤpxX" da BbO})EǁRC R;] Ef4w\=Vف.Z8i1I#[dGJLaP;g[ݨ;bMSozx;۱N[Ad$~lBrЗ(D(kaHšOb:0w25Y8FʑBuF$Tt!`g/EҢnn%ݻ:k,e U>K\]{V'T18 6?84BO\#I/rHjߥ'{!'0DtH% Lm@0d$?,>RsdJq:G2 oU{6@YYqr.u4|Bbė`;.`ÎK?؇A0 Cak  I2 cpˀ ~%`C'-ÎR~#mJ[} גnEܱ;~~?dTQ4=!7B$J]xL?qm^i=?_?^Ao%qzeb/i!tn<g0Շq-hfml=^~AG9Ø2 o{Ng|\֩B\aJ:%5҇;',ub30V\#)Y@2V1 7a:ުt,@TW,Wi%E6bFS?:LtZNjHkAclJ?[DxgDfv^Zn84C1 ,]5EBSϪ(c?VL MDeX@ K뵎uIJu}-.LVh e݂H,+= \#av񷩐#GUf2%ڗb㋔xFe-5: &5xV!J+}@%v϶Hpֳ't=R)@_Qo &q+5.`y悞 'Of4ҜqW:މvūTnC"s ]# N8՝wQPDuT qO^TBmCǠnSmPQ߀}%!0(c6wrF&hT%*keMz:Ic+VY7\OKP<'Y]DG -( _A+ ˒J*%ބcM "PT[tfSXݱ$ F5حUZBk\[F"0oNg ('5bC<4Ƌ1Si\\eT4_8p|wGw9KAxD][oȒ+_{ӷKٗIvpdgfoN$ّ\~%KV$%28.dXpWJQʌc-@.ǭZΝw֡hlo:,sybBeFA'?G7Q)՝,]uPTX &iN `龢 ׂ:L z I1;>OS(vԄoM|_h'uޕyr^^J-ηwv NKiZPIh']` E\ ɴ & #Dí&|Bh%cقy "⃨@mtO[cItY&λX΢ !7d1 8)N+p89Fx|sA^p|4R)[#qbM8m;p(]7;q W2cD5$UQPAh[o)ɷdZ ge(8]\-;oxR{F2* lO3Q5tF7Dm[ jqcYJo J[ Cm0&*/cB@3zƒijSZrdz{+zPq%ɀ2N} x%9fHmSfihF[6µPN|g!'_dHXYAu &f“bVlǽTʌ[m*z8lDZƮue12k0^|/hGI'd,,ܭ{ղ\w=V.;ѐuO˻Pn1)9)B6RJi e$us!m3}XBIY|}w>w kx/7/Wx×RVwߟ//S(x>fF_o9K3B^/T3.ɜ;~NIM >w*`@}-/}@5h)צPC2+% uȆNUhqPb 1QXE9PDu2vޫ` ΚX<"| IP8˒0Zk10Ab0Cd^wW)e-FRe4 Ni"QG,Y6A##iMJ.J)gKm+F|eD'dCpJ䵢;foqpQd5H.s^L1)BK{SeXG-KYM|@Fa(.FZoЄthgN`2{N3ʻ2|-~ izsQX6HǩZLY`H4Zjnγ* xpBD1F I[nwY"HKI0 6Lr[*EH>k@0c!oQGe:ʨ }b\J_J˭wHT,R~==lBJm? ;}-:hkKgȨl 8бFsΝ(LL!PCqW;LփbUzZ6+iuH@ëFB2k N,Re  OqM㕲ղ#Rᄛc1Mv|img 8:ZO(|8YUH9@pJ䡱Q!1Z2(TR)ib#mYP`߮,vRPeywa}w3.WX.5Zd5CeM‡ 9 ?_Gg ;ی6 Qӛ/1q.DTQ/4$;VLޜW.P?f(q4-wQq+P.uyNMJ1^C-XЄ0Zk%RQ ~]>ބ}Cyző[&;㉨PD)Z|$A[-/upX ]ZZ o{ ѕv%-qKR srޖȴn[Ev͏3]T-%^gV3m "~){ wBI#H&RyA]g SEL]~[vV_y N 48NC!B vU7`6T}:@ͼR{P5\Pk)C'4Pk׋ ⇀r﯃ w8g.ܢ8t. .~\oo?n/_9O606 V7zdn9FPiQI4~J)s1/nmR?x`^B +kNKi(RG.'UHmR-^5mu2LI`-Y  )vEu@fD!\:-%TZSfFmCuɈ1zQB[DX 0+ݭo+{s jqK21<8I!`gEZ'mƃWS2~yy~o×7Dh A1VVC,(L--qBnBRNP4Ϟ=@A š h(hsCju!ȂhCTujf8P5VfPAXסP EsiՌnW-[mP-ls{ɶ0pedk4o}@&jpA=^RKazOXxGUťf>I<#u%XLw}aRNH_9&W_E9NL++c_sm?)g͑y$Rti!'|푣zA_0FGr $#](WU8 =ϦxN^.1`tz4[2F F*i(Paetrљ6mkɒK DnAP"ױ k(B.aFG3)9xV^r| nK|xlQ*7NG2"hδ${yJdsbNfy:AlZ4æ k1f}oOY *!o=2Ř`7ݠ+((ga: d:a۸L'_ vq6zOq)ϊb唹͇!S=4m^ѾEkGE2hjOGKdH;8P8#ʱOGKD`3;T5֗ R %wrcsB U9 }h$1sAkNb}8Lu)ߞ͆T$2( 2>B+32׳"cxW;xsc=U_0/ޜ'nRu sXa}%oW>r|pgO^f=M,ě9>U`DKO^PyKaAl8׋2N |R?w!&EO=[*Sp#:oW#xEJ z(>Z!&O(趆?B(5ht*P;UmY|b +R֧j&P-jID"8):3UT8)R^-cKɋf@R9 Mx x $H\m*SI9cd,MW5TTB` .GʔiO|1^,v$uo{4~n9B< =_*hzF5n_/ '=+빨GbST-KxHۤO6Sפ:(Jy iK5%|Eo 97+ tqsrOeKpběKgqv59)T-=ulQTeMKb5CBox䚊M{l׃@\ ]!@cz1t:uii˙(O}.ⓞ6[*f]5[2J+^6pA1Ǭ XEB|Tb|\r]DZ ͵,X*EhbMWߔ^Itmv&[;J Do;aGˎ Qڞ jVØ9\X h=Nc;?w鵇̌_V MIg~=eETBZQa2v$”f^ BFşHZ𚘀ޕ8Jw6D̑;f&F\Q#>5l,9I)ҽ;wA!RSSˉ,8{4ڒ5l@BƂf32dS/\+Vu)0mSpb!>0~"PpǕQ?bJϛu3D|OI# 걡{ !bZ$pChH07E^®{ηmc(WjЗ_SZvx,њ%zlFȁ4$/Ycu*qk(ӟ5$BZ&Sls)J aʢ,mY]sFWXwy?TZI٤։Tf| e+H C.$8u{#Ṯqti[u;~. >kkh<28}.w. 
lYnY*}@dϏLçm7!4`2L`㙙o0*J@h^40>agWr}<'gw<}͛/#pt;>rl`fҔN{@F_L(sK?BwS2q1_\'&?4YoyɄ&Eq3/Fq8.^n=f߽'#2ӟ駻Fy^#\0)3r=tcQU Ez6d]M2OyaJAB@l) HGb3.!Flpg* WoeX.bLCQ@Cf\ ܢ\ְ3L,uZP"p3R#F$snPD%BpL8C>ᬖ2 3Nr% 5UKsP\#Dh1MVYdǝ4/L/|Ҡ>EE  zmFA[NqqwmQn V^X9kN)Ե' /7f=RsjumP* L!ȇ9jљ7Pkדσ2 k(8A;yGlsTJzkNa]fh8f/nixpAe'C۞RV0 U{}{=)J\+& +vTw 9A#p dlMjeLG6ނߵ??XPu,ƍ;(Amj JKX5oݼu$J)rvRs٬'Fb}Ue-tc2+ Qp%'Y.8bW2~%i!!:d6-VcVJ3:s{h6(&Qa2c@F(5=n/ռ,7lؿ /7jȥh rc|S;1hŃ\&Zc7f .|]Պ"3PZf|FYc^[@@c`k^"e!B- >8?[,B%j%u\OFK\cLQsh޺q H쬔 >{[7'Ւn j}ײbm} dZ!Q|#Z]hRyPZ0T}&W_לFWjX4u3Keq'X 5Yzdre;ytg#B$F r,2Qb0X|"_2&d2u)ݱą~[ U][ 3-fh1"x;wȭĴlٟ61;ym \ݳ\\.}H÷B]VW[V|]Ry5޴JmD4gZ 1l6jeR[!aIp%*H Xۂ@QzZǨFZ)طU%Za߫4YZj%Tu^xY!L,?yM_Gi] 1B);p[a*HtwG;N&3?6yƂ)ɋ;a Ė k/L&?Ύ_VKR풲8B'P?)z@~; uLRO@ 6 'ƻ9:/u:mKӫ϶+\p"pa1q䴎%|ܔu|Tf1:Q8DVƐ!,we-+ B |ƓoOWk o##kPYi8g C@2DꈲXHd$1r:c$c&` QfSU,L0_F3GZZ<حJ֧6K jit6\I _3ҵPbxkbA)H0d!qzcܕaꓶ.@X]ib=2/gΒ^ӻwya?ZlnTg:(%p:c8ed LZZOO{| 5 NNu*[¡nڦž16$fi1dˤV@v+44.BuGz!RwV ^&Ml "nȣ{ h/"aKKG(QQpa5&1̭؆66K npb% -"x3ߟvQ! qxֵ@zg; 4:\=ul#0nSQx VxLl]A_?Uv|#ͅ<~W7:ANZfjk~J:#nQp?: N{m5ռbT睶Tܱ6?T1 ଐ bwG$;(K7%~ܟ,iE~e̢wƾr2//75hSQr) NPX&%&5"#Ic YQR=B"4L:SLxc8RE4AFa1BG_A>5/ci! F&$0n 敮ی4;?_j7 ~HQŚ+B_)/7j iP*bdYj[gR!鐼p.򫮗iu͌ _3$Mqz.^KInLa/2`KT8#IpvqwyM:attЃFHU)N*)&^J1+L߯9@JQVAn8R\FC c7׈pf𫿿K?wC֦Nac~9(Ck%+}rDv( h; AEJ z#wiA&n^9A'FD|/[J," * Z<}(Ǹ[FCFT) g(/5⒕~-50PK @mI7lПjqпCb*q׬Y)A fw׈V*0jQN[X5`b8~*JrFJBʘo2:^{BJj Lq2'T PDU1kFi.ZM ,PQX88oսRJ#Ѯ^I*5KVr51=5qe'EN@ &e‚sf<$0hncs~x'wYBH'a&3%KOjxoJ9ט~lOm<5U4k *M'O"g;AdaHpcXp킼εH6,ЂI0V*.̇٧-TRz@p5)fU)׼' !%/yC2'> ՆL@Vjoמ}-5"dnzW.^vQo!X}ٯ[7A72З mmT_ZvCX]dž亣=[$RZכʋkQղkT_켢dǨܗR/nȢc_Fw j.Η]hKMQ%?BTO6 ՜{:l{; e8c-Xdc4ӭ{?b?{5sWlq Dh8<}1SWZqsLr~V;m\'І0*7_0-)Ahq9 noדŃ0jܧl꣝ܫū_ ZZ{}:yTf"`~K/0r 9ftk(Ng%^Q+>Kn)ljpLoJ>o{=9#דIAIOvzh|;lm-O_L*9g<{vxjǃ>q)Ps*avpWשpY =e?,?M<Pg!/Tc}$Bq` GDsb9÷5<2؟vXpfxִEj*JU9<XHj>ϝ0͔^UXB +7npR-o!\<\_>s  OQ=7//14rJJXʇ,$*qÒ$ߝφ߮kq0^Q@ >}dmX♩ԑVy$w>fDRP fwZb劻RBP,~ PT٥?ADŤpCSH -0Zc z]Ys}3|{#D4QATHF8g)0AS&n2\e#g#f{z F8Y>$-^Rr$^yoâ6^Ȣ!Qy&q s Y ;ZJ}JPb̛UQs)i̠$ [&5FC`,(xi4vJ۽F׺*z9iQS\p:jǸـFN m.7Jdm.5pd؆jNIsGĠ;W;gEr]$q;k("}%u9.YS<'Tj԰F-!0tCŘ*V(~\v@wAkgaR04!\E{`I,GY5Q B1W5v+Y@ʡE,V(Z@ v1'Ac%DTM{t]DB1v6dUBꢶ9≇j3܍#LhZ-; 6vrуV+]S ,;!.[LV7~G{yUCH`lvyJsÀ:B`pU܂[9$ك_~qf(\2uIx 0^Y֫돶q<&9j\11Ї!7%P jsFgiR@&47k!N|zWwWm@uwuJ˜%oο?Az !P |E5t$#vnBJDTH7+bLJ;˕[00?J]<=FNWw_>C DK\4w7w$31И׹o;~ t b9Qg/4Lg ґub d,=< N\F6M?4' tp`̠]m``pCnOx8D;-(Z58kC"o'(^S{EEi#o daeLvX۷Yy=YQ')n?) %d/[zLg\?p2qWu,~;|s ӵM|R6 e$)N'͎Jwek?n+9u^5O/ւ?wyk2iw]ԟWcmYA?ϟ+2{@c \ P^~z:5|;Bckb3c4G$5d/ׂްJS%[ B3u,=-^,2fz&c]W,MT:_FWMuQǒ} kN@0C%Oz8^X9 H5S.1> _#_ LIϪKc͒=jdu25.3?&vJf-$K'o_QJ~R5 Ntre*k29)ipDS-զEͲۂ] y˩<5o"H|+}a넲]2\1W:1~})T#hX;(cg9oق`=Z$PJ:&C3 @ΤRL`,N:H+nB^$K%T'Xr8cRWjǟ/mv-GW,v^5]d*ԓhs 3g?4lB&lT,eXhQ#|GrUZlǽ8suE=h1\QS\. `4ih02!pZCrm gt$c<:^*UԪy!r34 ]FT D*ɦ6 f=!Stbv:ySǾF;T3@X_mZ >nl%в&Ļ-i '?C?w14֭YOZIqA 1)8,M]yKDyAD !" 
r9`$w.Ά۟^Mh []maPs%*56ਠRtn8Y\]u&I0xc9:$vm7ZҢo@i_;[NnP54+ A\ƳE\sL?-4"(A*72)D]Rh]VhQC`m=)Pu) =){71VGG>a>S L-fW7e!]:"U9b|O&R6bA%@OS͡aWjvk˜F]rZ* ۢIxuNYSٹqˠvikgm'%plY>Q=:g1hz64>m*>@n R@Mk+)k 'B@wջ;ꌰNF׆w4:#t1ydל6<4:#<,y>דl~f|Q[I!W#;6NoN/,sJ`ҋկMO|;,z2ph^e5é^Y797 )2^^O>#2(N&|'Y&.L\d|gA+ćl "KNHt4rHYn$,#qèDF F^Smb[]1RR؝$ת7YM`o]ZrN@469i4e.ZFI:uyW%EeqÃ"S }s9O@?σ-sqv3ןNbx4UM TPW' G/(G&5(Q]?FsQG*Z}"})nA~%T"&6x+ FI#[]\3vkF*S^]5Á=˚[7՚1lJg[-u'n&n愡"i];?>_Zew)˗7^,ZZ;-ɍ?+ճ s=[.8lǷ15c߽,h9yBypШNgG$RB,)VK[&~/INQ9m|}z׋wqVa\t$!_-S|*glu۟p4c!E8''c|pGm*5Ю M>QOg;C^/!t$7MH` <)UA@ <= n3kD"rϝ`ק|jY)f֝jEu7XDvΚXkU#{iDSX esf4>&$Ɖ)Dd@uG%b5-TTһRjMJI*.\KUQjbS;=ku?v(E*4aKƝGDQ 3X2 Rq*fOmdԣR;^Է5HIQŷ9o@4eZuj VJ^Z&"&:OG^h )03Q[y1 $FH *O7S} 皛f#zdEޕJ|L@j$G)DOZptsI8t4 sKækOi;w+;mlּށ#D5ͮ7ksyBa7py0xqp9`:vA^w?nN_1WP$݇k.A7]Mo(ۻZR$˽-6eyp8Cc{;NYKnԹk e>T7[񧧒@Y }G3Ķ;?^(ю "cLI3'V*2ʫp5 9q .Ɛ7I!Au!\#`s`frۻrh/8"jDo)RIjcTwv ܾ/1jN MިD@cPpscnmXŒxѻ@#8& @L!e\en'xOSP_bl=amk;}|c' FE= K &^irOcz}hC+mk)8Rg^Nh|FLtԔw~ЬD0B?q߱B s ZhgIc:*d*4cTZۆ {LkϭBԴ+ ӎ,iOJl\ywDIv76[w1l=uJ>v5D !M8-gd~SLU޻6{hOVF!1j@WuİKuwE91dt_`pӻvٲgypa& >MtL4;C&uo]@8e4L+O:Y')k%u%#n7g%Uba>a8GC; UcOTD^+1ZZQǞDUh_ w%99HEܷI<{_qI)4hX>'ONtv:P{XS%bݹ9`{0"K081;, 鍣ZoI.{ +Z5)f)9I+f!j2/^T 飙f);IŧY[L?Je֖y9ӣ~i{:o7_zAA.G̞Wϯ֢*Zpgof/""C8{ B5ըuTtqDŽSBۣ?`&XН\pz5_kn[*pdu1B }5[-*k/tcY/o_AwK5$!\=-ɧ=73# -'Ss`uhA!B{Ў3k1c98H'R]H-ͣKe\ [opS6J{gwoeŜc]aЁ(eqY֝Ƥ!$^Q M18'{MrG(@ +ȝ..Ő%E# Պs\2w"=Đ Xr&~zI%,; ca]!9bς) R?,%z&!H }/ALEet=,հ]1rL:^ n܂T%Xc{*\t_&iulO3 }ffrzGR 8P7mt؟ 2y>5ĄDYC$qU_^yĄSkgjd 8‚C֍4X@yA 3W@.5bh , J$=g|)ЃD1ΚكG2K ]pNQrQH69 |C ==@8O:b2Vi't &簖Sv )Ն @ٲZb~Њ$AP!xh@ 89,X5aOԛ[!ϾQ(فqf_^\]3^:TY/ >gcm> Gkv'>c >wZ)P/5e* #ZLӋ. d-ɠ5Xp{Q穹u> ~_m؀DɵԴ㚕?,/r'1 rYt'i"wF Zj9BpE9Fi (Qc ,*/pvlZBXQ`4 >WX{]Q G)*\A49)=CR8^:Q9cD.M5 _I7_M"ܞ>a\Owjw2; 5cyc%~ـžH~擌bs_~x(̺[I}[Ld[,Czw& M?7!J*rfH#S ZŠJ+~1Ua?{|؜xn|Zg1hewD³{;l9]dϗ^QP{(C ˉDt9Qy鶈riKi+郊*?]G寞C=BZ gyp+La54$Ѹ?e4_5ʐk[sPu-1D*KАiC :c mQ9Hd5IꡠQ2 ]]nb@O_cJ;==l C;z,Щytn=Z* >8<-A'.i; 8DBxZOfYRw6Yg~=d9Zxd6 bO?S"ɔQ{i[^Um^9yKp:t/&k8dwAwxQOnQגSn]麺wK^ LPdL:3,L6*ވKι^a,+}. 9&LS5Ŕ^<ևkj1k"B?P(8/1yÈHs4Gv\BV##Ƒ%#^HOil:,=҈MAnv h=t)bjjԒUUr%t.}'Oõ5AЪҧŠvNd* IԙnՑ3: RCfIۣ ]XvDΟ^MBr]Ӎ5WLCx4%DjHθθ1aJE h6 V4߬hׂ= bfu۫we'M%]}J$mPG-:7h?׊9,M!Nߣ/mA7XtQYO(C6GɚS׌dUF|(3"Dc|PO=Pe{MFvCأZuPQU&/M=*3;ٱ+>ÏwUmPhQ倇2v Y/u(!TI&[gR`Ζ2/oK.$)ߕ]W7^z.{SP}Nq}~KDzIz7B6z >4iYU>7<~_1Ά2QܫCW BPdmɾZ|s cЁ>DUO>{9)X5 #NAwSOBۼ=V IsS@|<G&/4 vYebU!7y{In]*E(#` q "ӁJHџR V"OV.ocöu:^:|5 ֟]VO.W #|jv%J#0/U;L NXΛϮ#Suǎ.HW3iSG#U<vy7_CӒJ i`rR*~Nڬui%Ehh}R;owaީ9u~<17Hxߥ lK K[C-\(jIa ,ɹ8#+ c8 #VV/SwLKIpp;9G; 8eesjH4𒠶Z*PZmy?>B~PiiJ<t+@jH8Rv:Ɗ ~y6!N w H![XD9X(rA< >)$:\:%F \0\2͇X;@S*>K"uQ0ŸZOUYw>tz+^\k/V¥"PCV ~>~xy?_,} >Y˿ߙ_Y5XA}aX|SODRŮF6"KEjl=~b==unM#`T;Z f}{&ۉyA%{|i=gkf@~cȐ3< =Ab*@:tGaH{r]mo#7+H+|'k{.Av&0lvc${f#%Yn7Q c[zbXUȈN>JY # J2T-#G4jvܦzA>u|iN4 {;]Zg<΂Pe3Uzk*"\diLI>amRsrtG%ṗSL_mϟn]bBHEamzp7 ΥVETz@'D;]B;_M2Z]zu HҘvBN;=&@23_:mb Bkc+h. 2 @ E07D%Ube[m{UҠog}*ವNz٣r8yco_gxin8O RN{4R=MozqﯗG&r^E80WP&yїaFAs \Dzv6Y+_|/_*2~ ً᷐`g+JMkS-*Kn+dYقLs+Tf8k L>{vt"H4LJ'_vN}ڔ?MWݻ_V"ʏ9`4-eӕƢVFKF] JԲ Ic5lr@bh?:嶹ިԹMSnMJ\jk6:^4+\uݹ;9$t\BJ";MM,1OhqI-1A1 LIPPAE[퐂sBtɍu@LE̍QX|3KwͤMDC,e8/3%LQ3[GYxU>xREJS`WИDuyZzVb𻚦TPxMzKć\;G!1ـ\@0Unw)D Ch5>ݏ;BF1a)è㆑ N0qٕNE,eNoZ/f7G_ ɺN [:-_7(T|gǶK8NopVgïtzS!ӶFLKԃ]3Ϝ; f~GD)"QWVi\QKeCRPRl[%si3M%;< Z'\hNw& ':aPIzP7>s>TƃcE}+Pf*J4+Ì-`W9TRT ,3cThjRZӆВ䓝yh( !&o~_Gnbd'y1\/f}oG[gx5FBHiV牞^\?y~t. 
X;7g[j[`I}zX\57?6U.@eE/{ V߽Q;yIky*ũT5^i]yO~Z3Ǥ8k~ fYߎ٣iVxs %צFb6LOoo]Z[zIkk\y)\2]je\CTtq8`JC ԀV k VA+RBZ'F)aQTxx7?B;]ڰ,j+EW|> E9jUt[ gR_̩UI)V誒~Ie D# :%VR+&v⏻aZ " " " >V[]I*C/0KR=*+{`;_m^mU| (Z)X6  ^,c /?m1#I5=n)J*v*rB Hlqr:("Uo_uz$Jrbffu+`lh2`<#g {Fƞ4-C2Xc{K?xރd@ރ%`~n+ރDh.9״`u[=-R*/ 3~  R>tSĝEbҴ| }j!yA^Չ:H@H=i-]+˸FԒ jo" f(ur^B9ׂi Æ`2}Ye>/$z-)hU8_%>]~V/W ަ]OcȳԙtNh̻v}@}ʜ^JzN2{Z>>$gU]WquFX%1MU誙 3ju2X{* ^2}qB`V̂,COߒ-xna-9OO% e9 cQ :n`l?y8LɁKmn$ٔA \&?甬PAIu˯ʥƸ@@RT9TD' =nLD tÛ& a9d3RAYG_FXm*\bn`(*Z (tIW4o \yCw 2|Okoۗlm+M`ő.oʾ\b3GJǔz}q;)@+\)C*+%Ǝkk1?@Nb+f#RJ5B|tp``~B֗qNAkI˶y0.7ײ.HeLkY!@!WN#: e B7#QC"G tӠhw$`i&S/ +wEOv4y[N6z6EOH!B$L%SpS|꣟/5󔚰Uκѕ7vs JaMjUsnfԓ6[ɒ(By_E{Ƀ ?O2ʯm5O^,:}V,ѲmRf.ܼٟ0}i$gK(fh.RH1Oow׳͘W6,c ʹ9!|Ly{koR]O[6 *)9_DiZzQ4э(W/*vcQE" D6h8!6>Z$B2/{& ʑX AWtйd7-H@ -IeU@@ @pui+h]qH\ iܲ?7¤֘un/'K B& HJa$ :t;4\#(eUne %*MIrZIo%@)mq4B>mDRom `aZb~aKAJJLtacPC"aHd1*R)i*[a&hgg8v MR RXZa=_) Y sw-4oÃbl -oAxYkmȝt~yϏ{F3zonֿxR7z66~R|r.g=+Wͷ?6U.@~5&Lxcxa8(=@>i^#;~`2~l Ec " Rf=Ta\|c]TXc 7]zkHcwI+FX #)Gr+L%#H\@Ui_ LPNXxUtx~Mẽ.;! d8҈`Y>f,9k!r9S9.Lc-7n/QmyCpڵOC*z>Iܡp1g_ z\\Dh7өY=1#밈2jϕ U zRdp!7? W E))Ҋ)DCƇ*=:u<lÎ|9CEOd]RRԨ[9 LԲH}[Ƴw !b`φ 7[e*VR|i\mҶbBPz\) ;lu@V k62IPoDA(ug9\fp ruT1Cl/֪a4%a\_[؈{n8wEߠhZ;ǰ3*{z(> ,AuXģIzTV'`6W_. |j)0ի,sA~<\:\S0Wi %||0AuByZzhqDT*dңLYZAmx:\6 hTixyMYZ,32aP)ٻje (iP|.@='kju#cGRxC3yIh@iw^u :hs%VT$,)T QPS┿_r9 R(Üew}uG]c,艤_edFlBc?S\.~%w;H=i``$Jv>Nȉ撮"$,~a≭_F^~:OrܵT}rə؜"턓=UNnFt@:|5tFt}pT]'8Y/#n{(LW>8dnJ P\(O63'O!5,Z+V t]aϛse!p<Z '&虠',]sĵ]4Z}&~Ɗ]]6} lZmFiٲqUF"r]"Vnīi.2ҷMAfk2z].$埿m`45y 934T^$DÅw$2`}hCpdNt„0S="f9h4?v}ˇ+V|_I lE->㎩WRkbԚjAmՅ>]IwNxϨtJQE~8)^IwN_"cN) J}P?PXMvM?+-_wmsG.I`tm%|CЏ%aYh NT7Yc'.O ,sJCR{G+ޒ;W|KcͧKm)2I+E Jчq{꣺rlraaXn.}Nc&S\OqOd-?>OY E>mVcMܕy^\q+ hKӧt?pwwϣ4ԗv][wYV4e!M:X"lit}?0V P֊PR\}kprfՊ88M_p= _^j 5/Pgm nQOeu`^+춟Ut$jwKW-`LX@gB l9?[;f͕;q'K?8M1+n$q,V;y^avIt%I>M"jy PSsb'oR){M)a%ӋJ y6oUbc/I@fYN `A4F G9"Y"nCSWS-9؟q$z4n9×GU-؛2h2w$k[tw2~T(( /J‹zH_W 8h$>$b6uA[뼗ȔU1TN3^e=;[xnoI'2h;зb= SEHAFşcڠIVc^`1uh:5BVNt:GR_T!DP <$iR0YTc"{f*ybȘ$ KN jE'z"̳nz\m66`X3 !{J` V*|8%IrоhW) :egO.Q-_K\mЄT+xUDjٿ6ShBA#(Pޭ}Lwq贋i3!h0˥1^ XEh@yA,0+.J yt(ZK$t +Qbin.!(~qK=?yǻ >LDæ'z|?uVEvϟ?}~Vkp7]9s~^%ڐ[1^MW+fkӏȩ` xElYs ܎;VE.efh9rfΖJ&HkwxŻŻFp/Iet+q^je,ZRٶ?}.hܻhDc>!pbh 3_V@"?o?_A7 ό VCffmfڃ}mK “4ՇvJ"Qh'as2U4a\Xy( O* _wWA2SLF*b;/Ņ907&k:V\=sZ2H׻b`Ir B8l&RpY"R0]!K6 .ne^C֊\1|@Q2WS rCQҔME,Wةp=tsʋw19Ms-2߅J kܸ/o|uo.o9VVo~ b8 2E⎾H 6n,F8t$w.8 9>ғ,=b`vF7r̵um۫ xN Ps}m14') dU%KhEyF^`,=h]W+1Oߜ =+9i5w ߞms6k4ǫaE)W/ 60^/D 7r v>*Mq61^N9j$:[@zY,cYfŬ`3O+|H,䪤e劸kyLC1L.${=|f[}5|%*iXXeypQ.JE=^E,e)DCCPhVY2$H%2Mdl rˣUy6d[$1{ n4 {ibTjljW}Vhl$ viDL1/M׭ -^7Mj&_ jA+Խ=7wl1rL{mK>dٕ>jvXQUfRgԁ IhQ=}[˧EQ関P BAlXb3Hh ^U@o)3/GႯKԡ¿ͩ䖷Z1ƓoGM7k7\C4fژ=@;j+6<*Fز2ȵ^[T֬۝ nllNZOc&2䂽7PsˋX|%Dst}?Qҟǝws WfqB=`%ܪ٥6w"vjV&Q8<vX]r)TD[ CF(ǹ iqP!)C@F@Ki㋯ 7YJ bc{͙"˘)XAX "--de``y(s0x؟``&~NJܚ;F ٷ<^8I -a_byئ)3V3m7??%ZRAI`B!sbFzx.[M7 N*d/UŞ0e θkZ8t)y kZDP' YfYxO`v,@!11eh+;|p\*zsluW"?{WƑ /;M}8BIምe`t]$-0J#ooV@>є 2++*KU)E*α2f* )g\WdBI&ô+2pJ"AjxEaڤU^Ž?FD`4UX]DU]M ClvbA3YS{|ChW 8|ƨW(afGUm gfxn3%*㺍AA:7TxXەiJqBbE וN( յ* kpY(+.X*ϑֶBD,j4&HБy@:BnjN_-W1A Xv.5B3卅I1+\:灰$ pslc2xYh9h=3D4;K,9Hx"g#l p>w0 џ줃!ge\q+$S! 
>&ܓ FUk8X ̉t.Hr f{ln =̐PIoT0d %av?Fq|h"v_vh2v߾yT#=ndo%ca1l;},; d?Z"Lk~-w&6sڅ>k8ͼ;"; N7Qb7Ww`t'oʾxx:PQMvU&RTyۻF YJـ<>5F9Fam[iu+0/_kU07qrLN$`s᲍gg k>ean8KCp׋߮[ .J,K ^F>HojQ^0fB-^8VZ(_5ŀ5kLr9u.A ,/"E(?!yGVҹFv}ݺJIWUz ѐ7ĖÕ^.b7~)I9ArlAzW?GQK ~bHokS2X 8eԩ~6skNZ+wٯaGRrR6kF>]\z.RA%./pXMj#&G/&DT C`ŋ̆hӜg?Z4r;ZK+q,:3fqܑ)PGp*yՏ1293,63sNs0Wǘ)waK0vN-=J; 9ts|pדU>fb="p5 ëCFƆLk)ӊWi)[S%]LVeZ [\A]V\:"+dxN̐NUYinAJrEz]#ˬ,cuH"`&1j,g:0*rjm~,HQL]sϘdp>56^u!wgEA;5|1,@-㾞 C((=0whI1*DV/[#E\M*^MXd;W=_E90DdsEf뻭YsϻxFMaKwœNL쎼8KOhdP!Q67`vU1(֘HK-VYrJ^Xq TjrQTAk5e vYY)$iv t/:Y\N̕GD[jaaÆ +ƈx`BS NNVKh zU[l!r-%l=pn_t.bBd0:cؤdy\(etJ0>%QΩ}=kVcs1(rbwc }zC@%xSB1[),K,1̀b0j4HHRVFN$23Lr= `wA"[owVT<᪘2;0lqRl'ZJZ-bͦvY16Ι ^MV[jG5ξ{3Yy}dž'pG0#/F@G u-SoKnebJ~"z%U.;pG'76E%vO>d<:zbnQkr*1UObӅXRbDm~~m cKZnJb]?.q&#|?Bޝ]/ӂ"`y?&r_a+. ۂ9t/z]Z,V u-[{&pDQVpX6g f _}?y{b@D`Ɠ(F2m$a1Xp0L;4veĄKk1mWU#:r\kPTGK׵ skiDK)fSWJs"%+XrUXP/*D )UOV=Rb „\oV /J+gV"gDGa5f(!GE A m >Gzq̀OZSŪޖ{$!\n‡>@pi89*Soph ݽe;\"瑞q:yMߣX]8q7%R4)HC>1W_$>ջw//&*qɘ+oiad wαڳHXFk>7 zۗ ΎyLY#A!b'(S|k2-)U72T8;Ɗ\NaIN W(=sB=ZGOL #M[(?vx TaÍ[:mE$z$Vdŭ2 '|/dE7ݾm_Ί'=e_= _&Go"c4>7>YX<f>20zcc l9o߼y*.ƠwGQu<]Y +9N;}1N?0{O"|m'ā{en؎dznW7Ww`ʽ>GrW9_yTv *]*J[9T1иKU,lKa*J+ Y66(<>1%'Չ.9nݒSKO͛(!^=q59xe/03 dap |Xz07ӫY'me{ |uqi>_>Ei>}Eغ(S/Qq*S?n;O.C%)Ġ(4~lJɝ$;,᭷*A",\)h'KH;=XѼLaֵh Tr-\ALra6^@ÇHZs4'Cp9G&lW[hݺƍ3Ner=D7`D$=\ЂS-؈J|)wW.2-CYl-N%Ҙ(Ç?VXI I&}c6nGUW813r$pkLӕCn+&etݦWp$:hO/ tݦŎV]v3' vĖtxUEc@"cH;-ԘH=VOFWWUp=0{gR!MMifTlj!dLa]ijGgN!G`F3!yLf~;戶9fYekƨ֍3Sغӈ;lu\ Dw8Qi<0.3s$:ITqx9hCݐlx*>`UYpKQ ƻB}8%1pmݻl-Og盛BuSkd[XfvW7H P_KtqA{@e,4Rs3`CVsy2~ƲZ{|o[INI{I;y Id@MR葇JE'9$7$6xǩ0D=ߞϻ*#$]76^^6-* |[kB(:@gE*bHo~LQϷuXdTdmB]Vv%oZo ! DTBfwLp(2C ,z+-K7F6n 1Gk5 Nj0_g_ %;q̑rA9˒(;ԆKXǩFkma IkȔb#->M,NܔXLB2ؖ@b 2^ 0\&l@}!@7f/'SGwk_3z1 kRLJk籧Fޔ>dW\9/Ue]ggS- B ddRFR(vQtٽjF&92%ѣ&ڸEn?_\5HN~;FmQdMU=4c{L;^nB#R3{:J9U];RAGzZ5G(m+Ѥ ܛEr:]4ڳCS(MJ{ u[T!j Qž- E~~~c~b* nއFd&NKd|DYkf%c$͔|vL2%dQ}vSхͤ|.}E;ع]-执):<{9[Ƿ6$ ւ{~wr#rmUZuNxw3P&vgWӼU꤇w~xq'թq 3\ÕurӃ35V3F-Y%N8C4qԣU=0>*j<9ऀ p06֒'~/̖)9e7J7̟'v?nLQEuW 4gv~D6D[<\wCn'OwUWSx;#Y@3H~PYN̜~8qj;C:jK[HFv#:il0髱A =lgHuG%rKM PNWq۰{3Ls07&Р{8BCP(NCBNzR MʃA@ N"P So oTwv7ƺcui/a*6zٵxp:1#YI@{jD:;KV*q`ƒI$L:Z|ۛoP=HKzkr۾*d]x6o+ZU-Huˮǥsnf{̩-XU 6AXU?1WTN\> /O~1-۹=+nc `U+W^BgoCR7qNQq*;},42‚6xfG5xL0َw<}-Kt+e mW^_B$7 sI䘋%]s~>*=8f$mƃС%j ʌ R0it/B*Y_)Zdn+PiUME,lMc6! Rdڎ2U[ D"a bl6JZeeq@Qet?^c2[,9n3~/UyN%1r6D-EXRa8>hd]piZКn}\|+#eᾴ |L2CDIojz.Ȓޗ4JSǢA^P ;1jU8"e~_tӥl >j ϡwg%:d=XW tƹ jI<O'L`jbJ9Y *+ WABs4\q4F .Vsp+v7PUz{52s ?m6ݤL'K+Q'E{`d 쨖4}ؤIJRQ[mj$χmVTݲcd6=Yo>v t5mvJ&|NUSEqi+>i qium*Yi[ut( t i2 ;b\5Iפpèx6/i/bAxǛy QyE֨+Gѯ%VWWoeDt5 7Qljm^h0&&?o 1`cTӒ1 z>_/2ZQՋݶ^f*Ăh0={ЖT.6Jvn]`A8V-:\J漄Z7ƆJwUs}$Kh`rgW|xt|ɫɫ)TUˆٻF#t9{~? Cnwι58l,+^GRׁR҈=3$W DK{U]UU]U\pk$VTݚjN5@nhC )8G UDP`\ Ҩ :N@3b:bjzG+蹯=v5儰z:gmfa7>iP8꺽GX0D 37Z|3T \&w5;U 1E)}07}/iۺ}Y<=sg9Pkc=#'a!&~y뱪K5ofRnl U77Q{{x Jgv 4@y2,y“'RIqH0 &з!P++'=5XFY VȀ ;$Y`J@+nWҜV(gA8(AƵ5vk #OEeh VTI?)7KַQ6w`MkW]AP~jV,輸C;9:$NۀL ѴYv㬺4avsI̓*[Or)*sJbSJ:qPn]tb5Nr%E@ Rk3cqF/V8XhL W!~`hy83VXl!-Gq@aFr4hA!ۈ2x>_G9L)uDWS)#]EMTnzY@``<62,f  < +]Ral `eՏcNhD1AWyXit DۜȨ0$*5Χ2Ղ5oxLDZLSZ})k:D$v1xgkv7CS<5_OC(yq LT1^U#4efmjJKQrtBmAOծ5-&B9Tֿ!s>;ܽW5ƽo]#! 
ZJśO)ɂy;r_/.Iŭo^:f8y1I;zڵyԢvmÛj>$^rATy:m@D{$۫$dtԘ G%XZ i/=78 ^-1RļA+GrpwXzvm]Xjy~R~8) u7=[==9˂B1@HWa KQ 0$$æȍwXbx!29…-9_B5+8~%@A#F&#՘zDE * qmFS6Ԇ\JDP#8Nvx5WŨt G/Ai9e^o/j+wKT31!pj$Ev1t^cs0 xBCqƥ?'_l׳nƒK5KnLR ~﷣@A;攧 }aZt7xN)x@|/Y{1R5#f=rg9X1gt$!\DdJ⽷ڍ3;nN;h^S>v+hvBB^Զm@sEIۡv+ GtJFp"b0V<\օr]))S3PbVvWwrW畛7EAKg~f&{*Y 0abDD,7c!ר"6E2ĚD}Vm%$; 0hWPҁ + W#rn_."ryi)\Û"r[ο\*q/g;r("JDk;rD%gxu +%_ӜI)avH 66re<}BAP!r;rssOXtQ%U]G]7(gqq>_ (TRw>SO֓1|j"9Wel"Cd8F>CD} EIeJIRv40Nvʠ &wY愠WTWLd9&=8ubC ϻeXpi^a׀E>IU0v05s:SpOa K:ei~rd&'_<#"Ht7⊧s'#AXQmY"fAnqLHu&휧@m yO~6%UOaMfUMo9wM̊VOtFz%Dk^jzJlSmw}*岗$o\"ax)9Yh(Xn/f&w!!\D)~)06_ar$\_O:?UyoŴJe-y1,b閪eT;3IU>rehU漢NuJ\Qb8ls6w,FW/~ff;=!MojY ~h|hTj˛^,3/L%'/2/ݜx6 ^j5 ^AoŅGݖޖk2խ@.o"zs/4pS|>Ȉ!Z)a>E0Tft=Tfxp0 ݘ٧静^ūOQwVŻ&%X ^aJ./Ѕi֨JЍ]\9ZVEm*͘n(T5t%GWu=\> hC]^f:^=l.ނZTj!GY]1F.\g}eES{_jYm 0DiS(0-Sq2P'F2좳(pmdͪkB$#מShƅ gbKp1pSrrDDE$ GR^k,>:O1!a] hs0% 0X!S 4.ɔADLfs(rLZư$=)~!r_>z]^0J]^{yibj$~H"H(Ȼ_ԗ^wy9cK(O~8x6oqUww0WO^۫+x>6`+Gg߀ '#0f<ܧكޣfL8RHBpb79eNVugaɯEϕaSɯK!9z<ɫ}sWY4-?xЂmB9)KS#5G !qPd1r`SٛkP\ug ҧV! Ui!86GVV\Є9ᵗYe9,#8LXB2ȣ$Fp,E8#(w[AUPxF1n[񂊈)L iGu~M(x4J<1/pnPUߝ* hH;dXs˴P02&=1Rpʕ:AYX䌢`MEca,iRGR\{'H)B7L\ѬHu%| 2e1H<P^,ǮW 'm(1C4J3Yhl!4ʬR* ?R*x3D0'{b wybSe/<P #amN֞dbJ -l$0nMlМ2UmO('̭)F d&[Gtt[I,qq?&SdE.!B%{.033. !0{)t-bbmY ;A5J;qP"iJ^ eT0p:a_>]j5קI]{뢆Ԋ3I{dkJ#U-ND$*/wflH }idDɒ:Rmt8Zۈ7x7~fEU9V(#è YO=2jOQie$L-rCOXqo^UjqKG[l&S3H /-+W[oBT@&3!(?~ODp$KWO/ԑScՆ)[ѫm4Ckn#WQㅌ|6k0+dZ) H@yN__MmdVH_g2k4ngm'ye])\۪>_,Ib,E3 7waRټGDc-$VX",fVP;co:O )L):0TY4d!DkT)_&BۑY 3zẺJ1Ϩj{XҶ_腛gnRІ[^5Ҙ24]TRw.*;pik3>tU*/E6h5E MS/G+F(L( 2"Ym<RFJ* VHJ#|AXy&(ωeD*r4ǒ[o*wb$GAŰwzϖ'm@4Dĭ.@ J _.%Mw $: I+?r㎛2[Gh`n@#$ Lxhア<}'eJ}S~W.`Q -GM>V-ݷJ?$iAQkHttfꖽTQ%5>b+2|%7!}=( =5\"n ~5w:gViZpqyŒe+m%fɲt~!i~hk6kgED.]0f\MO[WQCO2TY\ݹwg5Zq r=Ca|דZTQ-{)6c>cǖ]X2\m9x}N0V&X_6kcK +8umI|4 }""Pҧj&Vū -PTxxJkZ(`Ȟ]c5'1 ٧,귆cbqZeii5ɲ+4'y ^?rˌQuYHȊ4JoWU srVNuF/*+&ꏸxslQ M!Y-LŢCWw tSbQNr .$OEݦLOU+ "efV[JfOcfH*(s= Je@NJG:i]v(%'QE>ڄJ TVWWué>r^987Yڲ?k51m!Yicf֬{J}PUATE9'{!,徦$3 %`+Rj4^19^JN|N5gNRCsrT@BL6B8ɍ˅uY`b39[g0\ j3 mZM͸4Up#şYnOݶMUҽEuɷ7- I {Eh)m$w+Juɮ]}_28l[5OK|Ǒ~N(ĽKPm]:ݯgvO:~U(g..srF3%!nSDűs'l>4-eX9-Ek6AQӿd[%ԅh yV.9z;[Rܖ> zPT-d h-tٝ;ςbA[/EBiӟǥH30/"{CM{5Did9G-{h+E !JPz $I*(kG9VB9hLֈrdX66^l̺X?njv:(.pk.:s>d:x9*V]+毮!JS%ErD ΅7rϙC(R #`'=WŻ}Xk MJ}ck)ZH QKQưgff2+fW|;8&Fh~3)2|*I-G;;]hD\t:L>IA{q҈&.ЦOZ> [<#W JzT&s/2tk" `V~ץ vQ 0x0r\~Aѐ`9CJIɯ?_a/A4e7#?cocu@%^Ȼ]0*ge-w =ׂrڋIU8<۰ ! &󋑢5U%E !KZ{~U#ˋ?Q>9s jvpj5ӪtNԊ'In5V$Ȏq%Bdž_b%t_ 1[O[o`<&PX^8M9*7ܛofz};ʯpIg_At'zM X FagGa, :6})딑|lrҤvvjp .lǻVykN\"vݼyAf-8=Oz3z -zl؞ogP 8YbkgA[[KFDgȟtqjíBnk"UnՒFOr6GZ)ts)i* k,x=n:+& 5Shml MmhumC{gT':zpWuAy2XZ{>?wo> ;?4gs}F0&udD8xxḽ#T3 *R4:P:Xm ?CZ0njq{Xk+Y83( [H¯$ SBCX&7ܦ1?i~Js9U8bKQر贈(W9jZc.?lWE 6<<[×RRB701pp3Y>p}plѪC_=zpOX :Q0&w37o=pb#k$qiKl *Ss)$8G;g &WFx"x 0C9 r9m42-hI+EsX %d$+͝PZQa& FcGeТDBKB.9 >XBr/ Nc q@osp2BoDP#3XqOLg:|܍`n'ǯ8`t=̋. 4R:)vڰ8E5&@/6|UN"qYA6B<y{Ȼ- cl#8B@4^ b4[ru RH.%̧cDR *hX!4PBf$1*"X_/҃#)R.+y8VfwgIDRK,@p@,ٝ E!n==7pƉ Q>u\E86rRpjD( LuFXWWC1:F($IogMh.4/4#LYTRh-vFM찎4^8B۷BhX/& Vh貘~nձ7/{V&;0>LŦ̈&2ߤB=|(Ï@z< \C8 )sK3)ŭ*=:h虅1l{&M;aDf]ۡRV ySLX?I]wu_BX'#QQxj'z4w9Jh.Ze<8^,yl?;ury1՚Y _+ɇrWzq9% zct;1H 33".l.0TdKVe҈L1!U[A( X-45$DXrI113 ʬQPr"ek#$`3EPVmtix8i?[ !Z&w? Ay;|wQ*U>:s̎t &6):T PU施Kw0VOC]-c"~ﻪŤ\\_N6)@[p۶>6/nrW^V vyN%]yX \G3x!I߹vYnhwS)'xW.{ݻ]#wǝ/W%QWwQ[~ەuSDE4Jc5.nN7h1w׵vO n)$䙋hk`)_{(,]W 2 ;}Md#K{ʜF ʺW/;c^8°.}+&(\ί.jE԰Eli؂_Q8hɐg4%+ZuF]VQ}WW7י M <9;q.̼C2߉x$#yZ?VAևzКą--.fgwï|Zǔ:)!%R)R٢Ds.zf^5Aȫjpi'ᯩ99S }CVt &a<qA8Z9r~hڣZjVnV\*qȏ`U 'N -/( w*?B+z[aD;/Rkq,\TacbWԧ?w!qᭅZRER wtHr_1 I\xjmK^/&/ T)(24Cks`ş~* h*0d_hHdgYvIJ=sɦAљn7rSa顋I b|?VIj9@iү? 
_Yn P(d@9l*07SVj:U–S&2afEԣo ܷE{^)Nӽ3.rTy|emA#lxe4d_"L՟圖"f ׆eV(':CDGbf R-eG'8"~b!DiG;YrĐ#Ix`~.2.29 mag9b&)9)RaA݁ QK^H/,pi ' =CT*CJ+YNR%-2@=7XȖJ6Z S͹ե`$'(1pv K"(ei$GV3K+TYCUEP-1 ~m%1 `\2e| |dhLQyC K SՀB#OXM{TE,=z v7zyd]zw+{g.Q2E]}vgA~vv`ȯxa$X$z1bKpw M A\Q#-p}Lum6Hnj2#JyqѤ%3uoa<W?D*6QE嫯>.1i(KAT1ܡםͫ^ B{z2/ pkmRKER"6|ҒXQpߕ8šG{ga }AF [KO.NkVyǵ#B+ȁ%}~~Ï@FMfޙU0"xJ@zu:Gc(it! ;Da/~tm߿y^=hK~=[L6èfLws]WE&:ʙuܒjK{n.b_,K{hPdGN ^J_pCyqOkUJftcf3Z2r8Xn.~0Sv%լEmq١`<ڋAq4Zq oK ;r͑f~GGz (5tBJރg"Qз4i%ر)j-:9DX~T }EZ_C9pL39~^LJkz4>aN~Dĥb$ }w "o{xck`<1H\" PHIC3:`5W8&}TxHi|}5|~e#znkx{3^\Tvk<8q=@|݀Ygg31g~IW(ˬ2I4n*˂1X ֌?7XCTIzۓXt)IJp2)]R{GcQŝ[H@j4q4|'Hwe&2XE|rB)Tc8tʲL\zc!LޡC=Z2Co^ Z7C0b ?qf|v}.&:MZ>0)Зk5EWQ|𻶯Vp5|l}o~w^R(Q&0* B/1q 1,#ZƵ/2T1c16 XS-dȚa7"kvY73* LeejrY"Dn|eQPe9FJ.,HzV1M %ւH5c4iPf„ `E]j!e`T {xËu׉/yכCxm ~Wɰ&Ɛl.'g$gD*]\`n3M,CzoR¢ཊY'k) B[34 h;OhȌR34z { p~ @ Z>Sa>g8("q\}~z"Fؓ{=_~ۅ;)y2ܟ1 K|Z:4=$IŮn@d>~qa[|7X>g#s̯f* nT(r|'V%欲m[V sT'ǝg Su`0 jkypӈ3\;} qDpjz'kyU}r&s J!lD1FK#qJ!G9xm12 "{< 5DLCnܘUzhOA: HcY &iA<~YPA{d0H HP@Q2#2rmE 2 vdL½^\z1,Ԋ魝ɧe=?_Bۻo>C޼_z`Z j~]Oe{~UGl U*L~އOCSΕ~G,y>*D]w}=b8cX1(\IX^! LyU -5E`GS\ZA!`.j ǪA%hq*Ek$Jw-)W~_vmEjy|;^'sJVd\(,"o9GQg̃c՚`y>]@1"[V-Q ;,#$wSA)- S$E;<-xax00D-zIaYY "I@X![ A,J$ c4F) NTR/HZVéK'[*[T!Qa"j%/8V 8|MU3E\k0>% f!BBQBdj= LbXd&CF-]ta5w l"0ZMsFt;~nILA̧5o25GO&,WDa-9GctQc% j-w2lzGGH5k]*;䪕QyA h jvnC`X?-$pi#E{Lҙs/W1t&}yGkNfV6g_v%!?jvO'|v;_fvz2CϔYo=]]HNjVd% _g/lFt a2G;=ԮG &'ˌ)Xiˤ:$V+U))'#4sד!cf(Y>V2K5c,1ֵvpHi v=|< |XCkFPKy\cFK$r8P fkt>XI#"`T8ӑST lR-RfT0gQub $A7ct{ ?Sw³z"$п6~ΫDiQEL/zS.nbk4UT(DrC[E!֗8lN =fTZIJ ^ j){D-?a{ & Nyc'jI`*Ơp[C{;Rp7W{W9E:W~LfUhc!ئw%(>m[H|2Q;F07~@L[#Y Aia$g(J SV Qjb~`̵z-\qXZ-G,ƫ |9iǢjr' 6!L6>/6F3Z\LNLc/2R)d>gvO-2*DGBL•df8Q=Ɋq]ԃ ۦK·Uӣ&4eY0ו,~_-e9QwF:O5ѵr Z 9{~Ƀ ٧9rI]TCTWj8>lR; (=(3y}y\]L eZ6xuw {3oұah~{~3Wsx Gp}%Gb%G肌'(Xwf{'/+=8#+h"qpb7<֓[P_~BG݌{_7F.3KՇhx]] j]Dv jADuvDqb[>`AxO^-z%&TVbd9[[ɵ=zKMwP; ;Ǩ#:gu3#BpJ&1'-eFg5(HZKM"!uTvVm'" *iDRA*m`vI]Myf6NڂEYhU,tͽ su`ͩ\c]uqש_}>>=+Td[`I[ tMJr oQ΄o&_˼Uf͚#Z)G0:.ܮ =nzG.[AiQ(jDS_4jT俴XnW|!blWvr".&۪eޡchRrI<$^;Jzy*˼5|<{sFhU-*T8w}-nC"$J8¥Q)`U~ew ?8 ?6evSX؀[l`ZxD$=qgKˍ;2dC_8W :! pB2t}؏N;~Xl]VEEc/EõF6J1ljYrdaSAQØ3g<'#%P<`rR*ѧlbp/# .b%j33$<M;3 /{T@ l&(m<ݧr^lt_lpmV`eˬxQ ZEPv^b}hKj IP//~5觔{>Z>M>g zew}v<%8g ~WU}XIĖF&v^ ȅՏ-ݟ ;tERJI)]r7 $F"TD^>zu4<1WPRUe-l3T0##M@X"".a5DXEct\C9 a®ht"0of8rktg%!:Blap9hj) &:kPDOb0U7GBRJTZ`M,pb [Y/r I46h4;*wUp|NI~r8F-(vҦ6cc-ViJy4E-]kJ0?l>ٯsm20D1R{5!psJ"CsIb_l¨ZZV%r€zJqvPjcP5j5aHh**& "ځkqW(g! Z~^Jj bBTÈk"(xì2` ! k14Rf&jFhoT4pvܨ y2)h"I;;sU2 z WT TX3 Ax)6J*$]UH "?&N鞟/%?} aDW!o^}'Lj,N0m*HA( ˫>htt͋x;iWD_]CxB{Wf:q Z'o%pI%?c+WxR(HR項45*fznՠBHŅx|DxKYn~W!h6'aK(;4@ h5$ TWs͕$X45H;#2#+h+tM%8g[of]F\D!*Vi[`jYokVۏzj_Pbo!&y=jh کnP] y̘u3ܧ:-2@1QCޡǝhҸ -Aݙaa_Ub{Ya"EmkvY?ncn6Ymq8R';~ZOR9ܭbv2n6Eoq^Pvǽg`5-uܫQIBܦ7{m=yuEW;Jn:Dl ڷFPͩ2(#B9ȴ qfrJ` QHSGٻFncWX|IP_TdWvcoecEjYV"Ү˕~!!&VfF/r9K2ӐRok |Eo%OOM_ځZa"x'ZĊ𾐭 BFgy}S[#7 o*ۊǷQH8fCD녂~Y'6ooyxn9T2OCf؇NZXȫ4$DGYLR:i}=J^P٘Wɦ9I6a;oQ0A'mp>SE&@ɸ8]~gFPUOur7) QPn'ti_N"H/qX 6b}AE75wR#'t~[ 5>"& y""S [M脎QEugJ Zn.ڭ y"$S=SK-1pJ[g'@Sڳ݂o)}m$䍋hL9ezn;uB11h}$ixʭvCB޸.SRtvC( i !T-J9',$IDfNخ("@qvJ=1&2<`yCAdb~<^POYfĉ6fـ>Yuk)X FyIoOm\r7ΖSEtg4(IDaQBQonj+{j6|7R2!-i@ kGB^OE C[dhޥJo21 ~Yj'[j*-Nyh~(zM>wb14y3Lz $kX1)"c>vmWv tZ^uéSR$~lWLrju"M0!|:fj! gZ5l»[mf&57"@*X^3YgrR'8^ibC@ u W֟3C5L@)qΗCxbNb\<z9w\pK^UkDʞ8VIrI{K{ԤQì={WHS&'yx]Rڨ6U y^T-0tTi81z[SEB>E6MLI%"j @_~KI `v!K[J)xϪp[!dM7{d ӆ}v.fM!0E*`Ճ&5z>iXz6xA 5t TٿB p\i5h{mJ`)z`:![YVm K07,!4XQK)/&2??ֲֲֲe fF) "ۣ9F ! 
Jan 30 21:14:20 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 21:14:20 crc restorecon[4671]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc
restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 
crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 
crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 21:14:20 crc restorecon[4671]:
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:20 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 21:14:21 crc restorecon[4671]: 
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 21:14:21 crc restorecon[4671]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 21:14:21 crc kubenswrapper[4751]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.695681    4751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.700757    4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.700926    4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701030    4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701122    4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701223    4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701316    4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701445    4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701554    4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701648    4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701738    4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701827    4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.701914    4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702012    4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702104    4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702191    4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702289    4751 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702410    4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702502    4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702589    4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702693    4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702787    4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702875    4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.702962    4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703080    4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703171    4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703259    4751 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703394    4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703514    4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703611    4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703716    4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.703991    4751 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704093    4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704182    4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704284    4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704426    4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704524    4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704615    4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704708    4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704815    4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.704912    4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705017    4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705109    4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705198    4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705286    4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705418    4751 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705516    4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705604    4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705709    4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705803    4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705891    4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.705990    4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706080    4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706169    4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706258    4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706389    4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706490    4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706579    4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706703    4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706802    4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706892    4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.706980    4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707113    4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707208    4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707297    4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707431    4751 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707533    4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707628    4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707719    4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707826    4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.707921    4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.708010    4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708255    4751 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708407    4751 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708520    4751 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708617    4751 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708730    4751 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708827    4751 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.708922    4751 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709018    4751 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709111    4751 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709203    4751 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709296    4751 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709571    4751 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709675    4751 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709767    4751 flags.go:64] FLAG: --cgroup-root=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709858    4751 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.709951    4751 flags.go:64] FLAG: --client-ca-file=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710060    4751 flags.go:64] FLAG: --cloud-config=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710187    4751 flags.go:64] FLAG: --cloud-provider=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710321    4751 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710480    4751 flags.go:64] FLAG: --cluster-domain=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710598    4751 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710717    4751 flags.go:64] FLAG: --config-dir=""
21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710817 4751 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.710912 4751 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711032 4751 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711129 4751 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711245 4751 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711406 4751 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711508 4751 flags.go:64] FLAG: --contention-profiling="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711619 4751 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711724 4751 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711819 4751 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.711912 4751 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712017 4751 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712121 4751 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712214 4751 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712305 4751 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712434 4751 flags.go:64] FLAG: --enable-server="true" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712530 4751 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712631 4751 flags.go:64] FLAG: --event-burst="100" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712756 4751 flags.go:64] FLAG: --event-qps="50" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712861 4751 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.712955 4751 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713049 4751 flags.go:64] FLAG: --eviction-hard="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713146 4751 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713239 4751 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713364 4751 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713468 4751 flags.go:64] FLAG: --eviction-soft="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713563 4751 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713681 4751 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713779 4751 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 
21:14:21.713871 4751 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.713963 4751 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714055 4751 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714147 4751 flags.go:64] FLAG: --feature-gates=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714246 4751 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714377 4751 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714479 4751 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714573 4751 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714666 4751 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714758 4751 flags.go:64] FLAG: --help="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714850 4751 flags.go:64] FLAG: --hostname-override=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.714941 4751 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715075 4751 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715176 4751 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715267 4751 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715436 4751 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715537 4751 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715629 4751 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715720 4751 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715829 4751 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.715926 4751 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716020 4751 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716111 4751 flags.go:64] FLAG: --kube-reserved=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716203 4751 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716306 4751 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716440 4751 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716550 4751 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716646 4751 flags.go:64] FLAG: --lock-file=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716740 4751 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716835 4751 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.716927 4751 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717024 4751 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717115 4751 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717209 4751 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717311 4751 flags.go:64] FLAG: --logging-format="text"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717484 4751 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717543 4751 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717556 4751 flags.go:64] FLAG: --manifest-url=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717567 4751 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717585 4751 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717596 4751 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717613 4751 flags.go:64] FLAG: --max-pods="110"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717623 4751 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717633 4751 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717642 4751 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717651 4751 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717663 4751 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717673 4751 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717683 4751 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717707 4751 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717716 4751 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717726 4751 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717737 4751 flags.go:64] FLAG: --pod-cidr=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717746 4751 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717761 4751 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717771 4751 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717780 4751 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717789 4751 flags.go:64] FLAG: --port="10250"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717799 4751 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717808 4751 flags.go:64] FLAG: --provider-id=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717817 4751 flags.go:64] FLAG: --qos-reserved=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717826 4751 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717836 4751 flags.go:64] FLAG: --register-node="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717846 4751 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717858 4751 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717875 4751 flags.go:64] FLAG: --registry-burst="10"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717884 4751 flags.go:64] FLAG: --registry-qps="5"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717893 4751 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717902 4751 flags.go:64] FLAG: --reserved-memory=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717913 4751 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717922 4751 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717931 4751 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717941 4751 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717950 4751 flags.go:64] FLAG: --runonce="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717958 4751 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717968 4751 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717977 4751 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717986 4751 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.717995 4751 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718005 4751 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718015 4751 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718024 4751 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718034 4751 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718044 4751 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718053 4751 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718062 4751 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718072 4751 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718083 4751 flags.go:64] FLAG: --system-cgroups=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718092 4751 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718107 4751 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718115 4751 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718124 4751 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718138 4751 flags.go:64] FLAG: --tls-min-version=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718147 4751 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718156 4751 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718165 4751 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718174 4751 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718183 4751 flags.go:64] FLAG: --v="2"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718195 4751 flags.go:64] FLAG: --version="false"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718207 4751 flags.go:64] FLAG: --vmodule=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718218 4751 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.718228 4751 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718489 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718501 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718512 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718521 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718531 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718539 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718547 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718558 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718569 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
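
The FLAG dump above is one record per kubelet flag in the form flags.go:64] FLAG: --name="value". A minimal Go sketch (not kubelet code; the two lookups at the end are just examples taken from this dump) that folds such records into a name-to-value map for auditing:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be long
	for sc.Scan() {
		// keep only lines carrying a "FLAG: --name=value" record
		_, rest, ok := strings.Cut(sc.Text(), "FLAG: --")
		if !ok {
			continue
		}
		name, value, _ := strings.Cut(rest, "=")
		flags[name] = strings.Trim(value, `"`)
	}
	fmt.Println(flags["node-ip"], flags["max-pods"]) // 192.168.126.11 110
}
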
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718580 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718589 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718598 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718608 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718619 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718630 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718640 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718651 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718663 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718673 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718682 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718692 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718702 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718711 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718719 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718727 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718735 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718743 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718750 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718758 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718767 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718776 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718786 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718796 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718805 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718818 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718828 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718837 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718847 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718857 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718864 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718875 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718885 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718894 4751 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718903 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718911 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718918 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718926 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718935 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718943 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718951 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718959 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718966 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.718974 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719011 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719021 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719028 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719038 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719049 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719057 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719066 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719074 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719081 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719089 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719096 4751 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719104 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719112 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719120 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719127 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719134 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719142 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.719151 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.719164 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.732078 4751 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.732135 4751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
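
The warnings above come from handing the kubelet the full OpenShift cluster-level feature-gate set: gates absent from the kubelet's own registry produce the feature_gate.go:330 "unrecognized" notices, force-enabled GA or deprecated gates get their own feature_gate.go:353/351 notices, and the effective map is then printed at feature_gate.go:386. A self-contained Go sketch of that classification logic, assuming a toy registry rather than the real k8s.io/component-base implementation:

package main

import "fmt"

type state struct{ ga, deprecated bool }

// known approximates a kubelet-side registry; entries taken from the
// effective-map line above, everything else is deliberately absent.
var known = map[string]state{
	"CloudDualStackNodeIPs":                  {ga: true},
	"DisableKubeletCloudCredentialProviders": {ga: true},
	"KMSv1":                                  {deprecated: true},
	"ValidatingAdmissionPolicy":              {ga: true},
	"NodeSwap":                               {},
}

func set(name string, value bool) {
	st, ok := known[name]
	switch {
	case !ok:
		fmt.Printf("W] unrecognized feature gate: %s\n", name)
	case st.ga && value:
		fmt.Printf("W] Setting GA feature gate %s=true. It will be removed in a future release.\n", name)
	case st.deprecated && value:
		fmt.Printf("W] Setting deprecated feature gate %s=true. It will be removed in a future release.\n", name)
	}
}

func main() {
	set("GatewayAPI", true) // OpenShift-only gate, so it warns as in the log
	set("KMSv1", true)
	set("CloudDualStackNodeIPs", true)
}
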
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.735309 4751 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.744595 4751 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.744727 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747381 4751 server.go:997] "Starting client certificate rotation"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747436 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747664 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-01 22:00:11.754174236 +0000 UTC
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.747771 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.777617 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.780394 4751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.782144 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.799752 4751 log.go:25] "Validated CRI v1 runtime API"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.839605 4751 log.go:25] "Validated CRI v1 image API"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.842299 4751 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.850267 4751 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-21-10-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.850315 4751 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.877514 4751 manager.go:217] Machine: {Timestamp:2026-01-30 21:14:21.87360574 +0000 UTC m=+0.619428459 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd BootID:1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f0:b1:31 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f0:b1:31 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b2:de:88 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bd:80:e7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b1:a5:02 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a2:6f:a3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f2:12:a0:7f:40:cc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:bc:5e:8d:70:6b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.877922 4751 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
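
The certificate_manager lines above rotate well before the 2026-02-24 expiry: client-go-style managers schedule rotation at a jittered fraction of the certificate's lifetime rather than at expiry, so a CSR is requested while plenty of validity remains. A sketch under that assumption (the 70-90% window and the notBefore date are illustrative, not taken from this log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point between 70% and 90% of the certificate's
// lifetime, so rotation starts long before notAfter.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// assumed one-year validity ending at the expiry logged above
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println(rotationDeadline(notBefore, notAfter)) // lands well before expiry
}
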
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.878232 4751 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.878713 4751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879071 4751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879126 4751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879504 4751 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.879522 4751 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.880435 4751 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.880478 4751 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.881600 4751 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.881775 4751 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886828 4751 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886864 4751 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
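
The NodeConfig above reserves cpu=200m and memory=350Mi for the system and sets a memory.available<100Mi hard eviction threshold; together with the 33654128640-byte capacity from the Machine line, that determines what the node can offer to pods. A back-of-envelope Go sketch of the memory side (kube-reserved is null here, so it is omitted):

package main

import "fmt"

const (
	Mi              = 1 << 20
	capacityBytes   = 33654128640 // MemoryCapacity from the Machine line above
	systemReserved  = 350 * Mi    // SystemReserved "memory":"350Mi"
	evictionHardMem = 100 * Mi    // memory.available < 100Mi threshold
)

func main() {
	allocatable := capacityBytes - systemReserved - evictionHardMem
	fmt.Printf("allocatable memory ~= %.2f GiB\n", float64(allocatable)/(1<<30))
	// prints roughly 30.90 GiB of the ~31.34 GiB physical capacity
}
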
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.886890 4751 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.887028 4751 kubelet.go:324] "Adding apiserver pod source"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.887051 4751 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.893919 4751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.893946 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.894033 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.894104 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.894132 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.895886 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
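
Every reflector failure above reduces to the same TCP-level symptom: api-int.crc.testing:6443 refuses connections because the API server is not up yet at this point in the boot. A minimal Go probe that reproduces exactly that diagnosis from the node:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// the endpoint the kubelet's reflectors fail to reach in the log above
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("reachable")
}
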
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.899076 4751 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901398 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901450 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901475 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901489 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901510 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901524 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901536 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901558 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901572 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901586 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901613 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.901627 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.903601 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.904316 4751 server.go:1280] "Started kubelet"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.905819 4751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.906042 4751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.906311 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc systemd[1]: Started Kubernetes Kubelet.
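
The "Loaded volume plugin" lines enumerate the in-tree plugins plus CSI that the volume manager can later match a volume spec against by name. A toy Go sketch of such a name-keyed registry (illustrative only; the real kubelet's plugin manager does considerably more validation, but it does reject duplicate registrations):

package main

import "fmt"

type volumePlugin struct{ name string }

func main() {
	registry := map[string]volumePlugin{}
	for _, n := range []string{
		"kubernetes.io/empty-dir", "kubernetes.io/host-path",
		"kubernetes.io/secret", "kubernetes.io/configmap",
		"kubernetes.io/projected", "kubernetes.io/csi",
	} {
		if _, dup := registry[n]; dup {
			panic("volume plugin registered twice: " + n)
		}
		registry[n] = volumePlugin{name: n}
		fmt.Println("Loaded volume plugin:", n)
	}
}
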
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.908649 4751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.909908 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.910002 4751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911796 4751 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911838 4751 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911797 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:47:55.262998858 +0000 UTC
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.911860 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.911951 4751 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.912009 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.913431 4751 factory.go:55] Registering systemd factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.913494 4751 factory.go:221] Registration of the systemd container factory successfully
Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.913658 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.913795 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919083 4751 factory.go:153] Registering CRI-O factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919122 4751 factory.go:221] Registration of the crio container factory successfully
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919223 4751 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919252 4751 factory.go:103] Registering Raw factory
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.919275 4751 manager.go:1196] Started watching for new ooms in manager
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.920224 4751 manager.go:319] Starting recovery of all containers
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.921571 4751 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.927605 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f9eb11091ae73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:14:21.904277107 +0000 UTC m=+0.650099786,LastTimestamp:2026-01-30 21:14:21.904277107 +0000 UTC m=+0.650099786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.935919 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936000 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936026 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936049 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936125 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936144 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936168 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936189 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936211 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936230 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936249 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936270 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936358 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936383 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936403 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936421 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936471 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936488 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936507 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936525 4751 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936547 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936565 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936585 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936604 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936624 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936642 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936665 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936689 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936710 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936730 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936751 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936770 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936830 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936847 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936867 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936887 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936907 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936927 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936953 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936973 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.936992 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937012 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937031 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937050 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937070 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937089 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937108 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937127 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937145 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937164 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937182 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937204 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937228 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937250 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937273 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937294 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937316 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937364 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937382 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937471 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.937493 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938218 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938322 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938411 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938491 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938516 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938580 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938603 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938656 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.938691 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.940919 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941029 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941081 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941113 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941154 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941179 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941204 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941242 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941266 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941301 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941358 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941386 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941417 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941439 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941470 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941494 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941517 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941549 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941573 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941604 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941627 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941654 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941690 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941714 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941751 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941792 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941817 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941849 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941876 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941899 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941944 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.941968 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942002 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942026 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942081 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942122 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942160 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942198 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.942233 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949685 4751 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949793 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949842 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949878 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949910 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949938 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949965 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.949991 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950018 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950045 4751 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950072 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950099 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950130 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950156 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950186 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950246 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950272 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950299 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950360 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950391 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950476 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950505 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950574 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950603 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950685 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950713 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950782 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950813 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950913 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.950982 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951010 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951114 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951140 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951205 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951376 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951413 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951487 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951519 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951590 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951620 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951697 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951728 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951805 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951875 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951906 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.951989 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952073 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952105 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952169 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952200 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952263 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952291 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952409 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952479 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952508 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952573 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952602 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952628 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952693 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952767 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952817 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952884 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952911 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.952990 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953063 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953090 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953149 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953173 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953257 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953280 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953366 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953515 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953549 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953623 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953645 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953703 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953804 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953835 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953897 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953923 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.953995 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954023 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954079 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954102 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954184 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954207 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954265 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954289 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954310 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954414 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954437 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954458 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954539 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954595 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954619 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954641 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954709 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954738 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954807 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954832 4751 reconstruct.go:97] "Volume reconstruction finished" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.954910 4751 reconciler.go:26] "Reconciler: start to sync state" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.957194 4751 manager.go:324] Recovery completed Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.971770 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.973339 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.973871 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.974455 4751 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.974505 4751 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.974620 4751 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.975700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:21 crc kubenswrapper[4751]: W0130 21:14:21.976535 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:21 crc kubenswrapper[4751]: E0130 21:14:21.976650 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978115 4751 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978149 4751 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 21:14:21 crc kubenswrapper[4751]: I0130 21:14:21.978179 4751 state_mem.go:36] "Initialized new in-memory state store" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.003065 4751 policy_none.go:49] "None policy: Start" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.004027 4751 memory_manager.go:170] "Starting 
memorymanager" policy="None" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.004071 4751 state_mem.go:35] "Initializing new in-memory state store" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.013111 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.065932 4751 manager.go:334] "Starting Device Plugin manager" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066060 4751 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066086 4751 server.go:79] "Starting device plugin registration server" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066772 4751 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.066810 4751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067250 4751 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067421 4751 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.067436 4751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.075221 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.075311 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.076823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077038 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077442 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.077515 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078920 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.078971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079234 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.079377 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081439 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081740 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.081960 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.082167 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083778 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.083944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084013 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.084398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086748 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.086846 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.088369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.113595 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159068 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159360 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159563 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159617 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159677 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.159902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.166988 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 
21:14:22.168564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.168877 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.169490 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261239 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261368 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261741 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261683 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 
21:14:22.261874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.261939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262249 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 
21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.262400 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.370004 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.372266 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.373042 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.416738 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.426388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.445548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.470680 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.475637 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524 WatchSource:0}: Error finding container 36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524: Status 404 returned error can't find the container with id 36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524 Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.477068 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.477638 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c WatchSource:0}: Error finding container c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c: Status 404 returned error can't find the container with id c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.490057 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d WatchSource:0}: Error finding container b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d: Status 404 returned error can't find the container with id b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.498404 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15 WatchSource:0}: Error finding container 5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15: Status 404 returned error can't find the container with id 5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15 Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.508241 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a WatchSource:0}: Error finding container 6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a: Status 404 returned error can't find the container with id 6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.515251 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="800ms" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.759233 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.759386 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.773191 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.774684 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.775141 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.887310 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.887544 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: W0130 21:14:22.896176 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: E0130 21:14:22.896265 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.906976 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.911990 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:19:28.252979258 +0000 UTC
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.980126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0dac8be882d0e246e6cccfd0c98124e06c22a845c68fe8c125449386f5c3f6c"}
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.981743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"36b802131419fb71069a2b542e78ddf0b6cce801cce0852720d517be58c4a524"}
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.983850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a9243fea3bf525aecd14125439d384187fe4b04a1c15fa233f887ba4bd6518a"}
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.985868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5de1be865ce6d22f44af2d043c618e71fa7e30d9368d721637c5edf29ceb4b15"}
Jan 30 21:14:22 crc kubenswrapper[4751]: I0130 21:14:22.987049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b1867bd6b8752103bf3861026128570fd9dfa9dd6a1f4d0f448713a4f78fbe5d"}
Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.316900 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s"
Jan 30 21:14:23 crc kubenswrapper[4751]: W0130 21:14:23.386908 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.387030 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.575541 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.577655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
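The "Failed to ensure lease exists, will retry" errors escalate their interval on every attempt: 400ms at 21:14:22.113, 800ms at 21:14:22.515, 1.6s just above, and 3.2s shortly below — classic client-side exponential backoff while api-int.crc.testing:6443 refuses connections. A sketch of the same doubling pattern, not kubelet's exact node-lease code (the 7s cap here is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "api-int.crc.testing:6443" // endpoint from the log
	interval := 400 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap

	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable, lease can be ensured")
			return
		}
		fmt.Printf("failed to ensure lease exists, will retry in %v: %v\n", interval, err)
		time.Sleep(interval)
		if interval < maxInterval {
			interval *= 2 // same doubling the log shows: 400ms, 800ms, 1.6s, 3.2s
		}
	}
}

Once a connection succeeds, kubelet can create its Lease in the kube-node-lease namespace and the retries stop.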
"Attempting to register node" node="crc" Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.578255 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.902805 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 21:14:23 crc kubenswrapper[4751]: E0130 21:14:23.904262 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.907314 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.912397 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:10:12.540771558 +0000 UTC Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991635 4751 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea" exitCode=0 Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.991836 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.993491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13"} Jan 30 21:14:23 crc kubenswrapper[4751]: I0130 21:14:23.996105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.000898 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.000950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.001171 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.002362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003551 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.003752 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.004935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.004981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.005001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006357 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3" exitCode=0 Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006380 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3"} Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.006503 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007764 4751 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.007833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.008737 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009447 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.009509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.908234 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:24 crc kubenswrapper[4751]: I0130 21:14:24.912513 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:01:00.386132096 +0000 UTC Jan 30 21:14:24 crc kubenswrapper[4751]: E0130 21:14:24.918933 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="3.2s" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.014417 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.014890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.015761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020171 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.020274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.024869 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8" exitCode=0 Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.024948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025014 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.025990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"efe6b37689f97464405ccee9a22eff435e66be2c6103b5187255056bf0febaec"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.026974 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.027985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.031822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.032051 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 
21:14:25.032489 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.032610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945"} Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.033899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: W0130 21:14:25.091578 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.091700 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.179312 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.180619 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.181211 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Jan 30 21:14:25 crc kubenswrapper[4751]: W0130 21:14:25.382409 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Jan 30 21:14:25 crc kubenswrapper[4751]: E0130 21:14:25.382491 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Jan 30 21:14:25 crc kubenswrapper[4751]: I0130 21:14:25.912729 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:14:46.730516497 +0000 UTC Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.039850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63"} Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.039938 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.041570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043632 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2" exitCode=0 Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043741 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.043787 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044431 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2"} Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044496 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044719 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044955 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.044975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.045134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.045311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046760 4751 
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.046780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:26 crc kubenswrapper[4751]: I0130 21:14:26.913008 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:31:35.323365111 +0000 UTC
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c"}
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4"}
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053434 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053480 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.053433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8"}
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.054979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.055059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.055084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.802650 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.802884 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.804958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:27 crc kubenswrapper[4751]: I0130 21:14:27.913860 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:10:57.825056845 +0000 UTC
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062773 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca7bafdd301335a08edb5982410cee5965742f6b772c88c52ae3630214a4b631"}
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699"}
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.062915 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.064281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.177164 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.381986 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.383806 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 21:14:28 crc kubenswrapper[4751]: I0130 21:14:28.914717 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:58:03.824856609 +0000 UTC
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.067540 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.069993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.275415 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.275614 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.276923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.276982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.277004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:29 crc kubenswrapper[4751]: I0130 21:14:29.915162 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:55:08.494797264 +0000 UTC
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.141847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.142086 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.143896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.450674 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.450918 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.452621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.601578 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
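The "SyncLoop (probe)" entries above and below track each container's probe-state transitions (readiness "" while unknown, startup "unhealthy" → "started", readiness → "ready"). The transitions come from periodic HTTPS checks against the containers' health endpoints; a minimal sketch of such a startup-probe loop follows, where the endpoint uses a port that appears later in this log, the 10s period is a placeholder, and certificate verification is skipped only because this is an illustration:

```go
// Sketch of a startup-probe loop: poll an HTTPS healthz endpoint until it
// returns 200, mirroring the "unhealthy" -> "started" transitions logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   1 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.126.11:10357/healthz") // port taken from the log
		if err != nil {
			fmt.Println("probe failed:", err) // e.g. connection refused, timeout
		} else {
			fmt.Println("probe status:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // container counts as "started"
			}
		}
		time.Sleep(10 * time.Second) // placeholder probe period
	}
}
```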
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.602711 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.602958 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.604494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.643140 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.799635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.807028 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:30 crc kubenswrapper[4751]: I0130 21:14:30.915461 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:00:04.650764547 +0000 UTC
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.073382 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.073466 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.074953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.074954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.075205 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:31 crc kubenswrapper[4751]: I0130 21:14:31.916023 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:07:53.029691276 +0000 UTC
Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.076749 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.078204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:32 crc kubenswrapper[4751]: E0130 21:14:32.082503 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 30 21:14:32 crc kubenswrapper[4751]: I0130 21:14:32.916709 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:49:56.708767567 +0000 UTC
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.337975 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.338217 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.340139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.344285 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:14:33 crc kubenswrapper[4751]: I0130 21:14:33.917175 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:37:53.106884682 +0000 UTC
Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.082396 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.083554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:34 crc kubenswrapper[4751]: I0130 21:14:34.918374 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:36:37.397229511 +0000 UTC
Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.259543 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.259631 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 21:14:35 crc kubenswrapper[4751]: W0130 21:14:35.802771 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.802873 4751 trace.go:236] Trace[414018107]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:14:25.800) (total time: 10001ms):
Jan 30 21:14:35 crc kubenswrapper[4751]: Trace[414018107]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:14:35.802)
Jan 30 21:14:35 crc kubenswrapper[4751]: Trace[414018107]: [10.001898977s] [10.001898977s] END
Jan 30 21:14:35 crc kubenswrapper[4751]: E0130 21:14:35.802899 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.908360 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 30 21:14:35 crc kubenswrapper[4751]: I0130 21:14:35.919588 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:13:32.257966145 +0000 UTC
Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.338569 4751 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.338625 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.505109 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.509403 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.509457 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 21:14:36 crc kubenswrapper[4751]: I0130 21:14:36.919972 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:29:08.862525866 +0000 UTC Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.921094 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:19:21.360248585 +0000 UTC Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.990415 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.990753 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:37 crc kubenswrapper[4751]: I0130 21:14:37.992274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.029561 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.092762 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.093990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.106592 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 21:14:38 crc kubenswrapper[4751]: I0130 21:14:38.921450 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 
21:23:38.659999399 +0000 UTC Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.096239 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.098173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:39 crc kubenswrapper[4751]: I0130 21:14:39.922620 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:02:39.977112097 +0000 UTC Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.147432 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.147616 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.148920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.148990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.149018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.151847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:40 crc kubenswrapper[4751]: I0130 21:14:40.923193 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:06:49.227051537 +0000 UTC Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.100570 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.100640 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.101986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.102071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.102097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.485572 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.488099 4751 trace.go:236] Trace[1353107066]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(30-Jan-2026 21:14:26.545) (total time: 14942ms): Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1353107066]: ---"Objects listed" error: 14942ms (21:14:41.487) Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1353107066]: [14.942738788s] [14.942738788s] END Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.488205 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.489835 4751 trace.go:236] Trace[1410619491]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 21:14:31.475) (total time: 10014ms): Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1410619491]: ---"Objects listed" error: 10014ms (21:14:41.489) Jan 30 21:14:41 crc kubenswrapper[4751]: Trace[1410619491]: [10.014455076s] [10.014455076s] END Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.489896 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.490981 4751 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.491046 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.492625 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.506203 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526235 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36486->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526302 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36486->192.168.126.11:17697: read: connection reset by peer" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526721 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.526764 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.848408 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 
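The lease controller's retry interval grows from 3.2s (at 21:14:24) to 6.4s here, consistent with a doubling backoff on repeated failures. A minimal sketch of that pattern; the 200ms starting value and the cap are assumptions chosen so the printed sequence passes through the two intervals actually seen in the log, not values taken from the kubelet:

```go
// Doubling backoff sketch: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, then capped.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // assumed initial retry interval
	for i := 0; i < 7; i++ {
		fmt.Println("retrying in", interval)
		interval *= 2
		if max := 7 * time.Second; interval > max {
			interval = max // assumed cap; the log never shows one
		}
	}
}
```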
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.897391 4751 apiserver.go:52] "Watching apiserver"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.902702 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.902945 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903435 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903472 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.903494 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903814 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903858 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.903944 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.903588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:14:41 crc kubenswrapper[4751]: E0130 21:14:41.904155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906662 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906789 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906827 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.906546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907161 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907248 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.907448 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.909317 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.911438 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.913486 4751 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.923697 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:35:22.664207397 +0000 UTC
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.951241 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
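The err strings in these "Failed to update status for pod" entries embed a JSON strategic-merge patch to pod status, doubly escaped by the logger; the patch itself is being rejected because the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is not serving yet. Once unescaped, the payload is ordinary JSON; a sketch pretty-printing an abbreviated version of it (the uid and condition are copied from the log entry, the rest of the patch is trimmed for brevity):

```go
// Pretty-print the (abbreviated, unescaped) status patch embedded in the
// failing status_manager entries above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	patch := []byte(`{"metadata":{"uid":"9d751cbb-f2e2-430d-9754-c882a5e924a5"},"status":{"conditions":[{"lastTransitionTime":"2026-01-30T21:14:41Z","status":"False","type":"PodReadyToStartContainers"}]}}`)
	var out bytes.Buffer
	if err := json.Indent(&out, patch, "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}
```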
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.961284 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.974782 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.989597 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994605 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994682 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994822 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994880 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994901 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994937 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.994966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995005 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995025 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995066 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995099 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995134 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995176 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995196 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995219 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995273 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995294 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995312 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995367 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 
21:14:41.995423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995439 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995464 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995542 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 
21:14:41.995630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995665 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995684 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995803 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995845 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.995889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996059 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996124 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996161 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996220 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996248 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996340 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996384 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996403 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996435 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996482 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996513 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996529 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996576 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996609 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996625 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996656 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996687 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996750 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:41 crc kubenswrapper[4751]: I0130 21:14:41.996765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996780 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.996812 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997311 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997456 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997750 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997785 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.997895 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:41.998059 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.498036114 +0000 UTC m=+21.243858873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998184 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998483 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998572 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999014 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999074 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999122 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999168 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999205 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999296 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.998807 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999908 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999956 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:41.999996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000072 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000276 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 
21:14:42.000419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000486 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000598 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000638 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000674 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000954 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.000992 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001060 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001227 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001301 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001396 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001430 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001582 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001619 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001654 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001695 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001846 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.001998 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002121 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002180 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002314 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002386 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002460 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002446 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002624 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002649 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002698 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002793 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002818 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002843 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002891 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.002998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") 
" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003084 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003135 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003196 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003253 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003716 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003804 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003822 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003835 4751 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003853 4751 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003867 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003880 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003895 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003906 4751 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003919 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003932 4751 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003947 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003961 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003973 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003986 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003947 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004196 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004690 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005004 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.004679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005388 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.003997 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005942 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.005982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.006638 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.006703 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.506682026 +0000 UTC m=+21.252504675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006863 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007172 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007176 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007375 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007613 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.007755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008444 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008541 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008940 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.008948 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.009019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.009834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010430 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.010545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011487 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.011756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012004 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012102 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012396 4751 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.012520 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013899 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.013753 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.015682 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.016423 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.516400223 +0000 UTC m=+21.262223072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.006524 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016823 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016847 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016865 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016882 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016899 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016915 4751 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.016930 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028571 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028609 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028627 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.028698 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.52867173 +0000 UTC m=+21.274494379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.031039 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.031511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033003 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033335 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033945 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.033807 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035216 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.035825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.036185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036467 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036488 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036501 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.036549 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:42.536534075 +0000 UTC m=+21.282356724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.036982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037303 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037695 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037838 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.037855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040464 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040524 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.040857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041110 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041416 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041463 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041597 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.041975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.042607 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.043043 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.043636 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.044046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.045071 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046531 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.046713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.047467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.047650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.048631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.048701 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049050 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049588 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050014 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050040 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.049990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050085 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.050788 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051997 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.051989 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052319 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052583 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052709 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.052858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053197 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053507 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.053909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054238 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054539 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054701 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054724 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054688 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.054812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.055949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056851 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.056913 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057087 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057135 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057187 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.057440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.059124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.061767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062156 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062451 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062729 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.062898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063079 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.063936 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.065886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.068640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.074858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.075410 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.075582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.085986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.090870 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.090961 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.096807 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.103701 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106354 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106915 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" exitCode=255 Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.106974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63"} Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.117405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.117893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.118044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.119377 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.119524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120512 4751 scope.go:117] "RemoveContainer" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120591 4751 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120609 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120714 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120736 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120746 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120756 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120769 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120781 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120794 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120806 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120818 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120848 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120863 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.120878 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121703 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121944 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath 
\"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121973 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.121994 4751 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122012 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122030 4751 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122048 4751 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122066 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122086 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122104 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122398 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122423 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122442 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122461 4751 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122478 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122515 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122531 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122565 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122584 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122603 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122637 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122653 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122670 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122687 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122703 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122720 4751 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122738 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122757 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122777 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122794 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122828 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122847 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122864 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122881 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122898 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122916 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122933 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122950 4751 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122967 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.122985 4751 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123004 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123022 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123042 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123058 4751 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123076 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123095 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123111 4751 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123135 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123153 4751 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123171 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123189 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123206 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123224 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123244 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123263 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123281 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123300 4751 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123318 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123366 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123385 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123403 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123420 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123438 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123456 4751 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123473 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123492 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123511 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123529 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123548 4751 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123565 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123583 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123600 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123636 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123654 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123671 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123687 4751 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123719 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123735 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123752 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123768 4751 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123787 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123805 4751 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123822 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123839 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123855 4751 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123871 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123887 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123903 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123919 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123954 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123975 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.123991 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124009 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124025 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124041 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124061 4751 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124081 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124100 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124117 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124134 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124150 4751 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124170 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124186 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124204 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124221 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124238 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124254 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124271 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124318 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124356 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124374 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124390 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124406 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124422 4751 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124439 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124455 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124473 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124491 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124508 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124524 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124543 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124562 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124580 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124596 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124612 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124627 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124643 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124659 4751 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124675 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124692 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124708 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124724 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124740 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124756 4751 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124772 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124787 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124802 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124818 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124835 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124851 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 
21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124867 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124882 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124916 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124932 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124948 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124964 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124980 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.124996 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125014 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125031 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125047 4751 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.125065 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:14:42 crc 
kubenswrapper[4751]: I0130 21:14:42.129899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.141783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.152098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.163519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.174756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.183363 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.229917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.242850 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.243814 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a WatchSource:0}: Error finding container 9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a: Status 404 returned error can't find the container with id 9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.250707 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.267906 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8 WatchSource:0}: Error finding container e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8: Status 404 returned error can't find the container with id e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8 Jan 30 21:14:42 crc kubenswrapper[4751]: W0130 21:14:42.268388 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593 WatchSource:0}: Error finding container 86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593: Status 404 returned error can't find the container with id 86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593 Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.528932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529190 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.529155711 +0000 UTC m=+22.274978390 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529458 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.529492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529571 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.529625 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.529609362 +0000 UTC m=+22.275432031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530027 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530079 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.530065693 +0000 UTC m=+22.275888362 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530258 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530293 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530312 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.530419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:43.530402011 +0000 UTC m=+22.276224690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.630835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631020 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631322 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631453 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: E0130 21:14:42.631604 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-30 21:14:43.631584988 +0000 UTC m=+22.377407657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:42 crc kubenswrapper[4751]: I0130 21:14:42.923934 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:40:13.769444623 +0000 UTC Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.111456 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.111527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"86be573b52ab01f88c21acd0d53bbf28b3b51e60cad206573fb9d749616b4593"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.113551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e17ef3b78fbfdf1726d9df156b0b0b217b2d101e105a28372e308ba3338ce1d8"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.114880 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9e93b94ff985cacc5b3b4759872fec230b8cb0d3eae55347a753e51e8e6dc32a"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.116974 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.118826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d"} Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.119078 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: 
I0130 21:14:43.127049 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.140846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.153986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.173689 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.186681 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.208500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.223909 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.236793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.253507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.272111 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.291791 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.308847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.330071 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.346564 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.351445 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.353577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.371290 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.373390 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.386790 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.403500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.421074 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.438350 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.455504 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.476162 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.500563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.518575 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.537310 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539653 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539753 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.539855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.539919 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.539884192 +0000 UTC m=+24.285706881 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.539989 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540036 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540090 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.540065796 +0000 UTC m=+24.285888475 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540143 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540859 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.540890 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.541474 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.541453089 +0000 UTC m=+24.287275768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.541564 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.541550671 +0000 UTC m=+24.287373360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.555047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.580532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.607455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.622360 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.635896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:43Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.641226 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641456 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641499 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641519 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.641583 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:45.641565172 +0000 UTC m=+24.387387831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.924173 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:17:37.907727507 +0000 UTC Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976478 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976628 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976708 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.976872 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:43 crc kubenswrapper[4751]: E0130 21:14:43.976936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.982168 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.982842 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.983855 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.984437 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.985379 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.985829 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.986365 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.987194 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.987815 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.988679 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.989126 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.990113 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.990615 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.991076 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.991905 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.992388 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.993226 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.993584 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.994096 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.995302 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.995796 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.996804 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.997229 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.998248 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.998665 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 21:14:43 crc kubenswrapper[4751]: I0130 21:14:43.999250 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.000287 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.000788 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.001695 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.002188 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.003009 4751 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.003110 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.004687 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.005630 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.006058 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.007480 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.008070 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.008949 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.009594 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.010592 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.011063 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.011987 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.012622 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.013532 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.013975 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.014813 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.015315 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.016412 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.016904 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.017685 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.018119 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.018949 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.019821 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.020267 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 21:14:44 crc kubenswrapper[4751]: I0130 21:14:44.924382 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:17:04.707360338 +0000 UTC Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558648 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558712 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.558680592 +0000 UTC m=+28.304503271 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558774 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.558831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558852 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.558829855 +0000 UTC m=+28.304652544 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558863 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558939 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558968 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.558991 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.559051 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.55902895 +0000 UTC m=+28.304851639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.559098 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.55907041 +0000 UTC m=+28.304893089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.659138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659357 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659378 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659391 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.659451 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:49.659436199 +0000 UTC m=+28.405258858 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.925251 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:09:17.260773566 +0000 UTC Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975112 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:45 crc kubenswrapper[4751]: I0130 21:14:45.975238 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975389 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975489 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:45 crc kubenswrapper[4751]: E0130 21:14:45.975576 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.128287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41"} Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.158260 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.177483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.197545 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.216849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.238026 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.259513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.280016 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.299590 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:46Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:46 crc kubenswrapper[4751]: I0130 21:14:46.925785 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:54:06.420919355 +0000 UTC Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.892925 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.894813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.895263 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.903554 4751 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.903791 4751 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.904942 4751 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.904985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.905041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.926259 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:50:11.511144631 +0000 UTC Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.935435 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.943990 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.960638 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.966795 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975376 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.975448 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975493 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975658 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:47 crc kubenswrapper[4751]: E0130 21:14:47.987482 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:47Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:47 crc kubenswrapper[4751]: I0130 21:14:47.992264 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:47Z","lastTransitionTime":"2026-01-30T21:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.007355 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.012482 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.030697 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: E0130 21:14:48.030859 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.033142 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.135392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.151787 4751 csr.go:261] certificate signing request csr-9mtls is approved, waiting to be issued Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.170686 4751 csr.go:257] certificate signing request csr-9mtls is issued Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.237542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.340385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.443542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.546357 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.625515 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xdclq"] Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.625865 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.627660 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.628211 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.628891 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.643721 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0
8287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.648494 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.656473 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.668711 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.684534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.684618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcg6\" (UniqueName: \"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.687509 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.698991 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.716492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.731627 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.743233 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751450 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.751474 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.759694 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:48Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785742 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcg6\" (UniqueName: 
\"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.785901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-hosts-file\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.813870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcg6\" (UniqueName: \"kubernetes.io/projected/3e4f7eaf-acd6-4cf5-874c-d88c4e479113-kube-api-access-6qcg6\") pod \"node-resolver-xdclq\" (UID: \"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\") " pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.853764 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.927311 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:09:17.0206572 +0000 UTC Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.939528 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xdclq" Jan 30 21:14:48 crc kubenswrapper[4751]: W0130 21:14:48.954869 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e4f7eaf_acd6_4cf5_874c_d88c4e479113.slice/crio-eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467 WatchSource:0}: Error finding container eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467: Status 404 returned error can't find the container with id eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467 Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:48 crc kubenswrapper[4751]: I0130 21:14:48.955864 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:48Z","lastTransitionTime":"2026-01-30T21:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.000201 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.001653 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5sgk2"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.001843 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.002055 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xxc7s"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.002396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003169 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vgfkp"] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003488 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003777 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.003828 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004285 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004987 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.005024 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.004991 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.005546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006063 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006132 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006143 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006223 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006398 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006446 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006649 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006712 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.006764 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.007508 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.008380 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.027674 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.059678 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.060818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.071905 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087410 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087457 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 
21:14:49.087478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087678 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087729 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087777 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.087998 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 
30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088142 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: 
\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088273 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.088288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.094619 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.106323 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.116113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.130675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.135588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdclq" event={"ID":"3e4f7eaf-acd6-4cf5-874c-d88c4e479113","Type":"ContainerStarted","Data":"eb24b3f63c5e086ccba8796160bec62efdd0f3206635c7a349c953c84d64d467"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.154606 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.163629 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.172173 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 21:09:48 +0000 UTC, rotation deadline is 2026-11-22 14:03:10.552479534 +0000 UTC Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.172253 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7096h48m21.380230385s for next certificate rotation Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.177346 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188955 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.188982 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189086 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189122 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189186 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-os-release\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-conf-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189151 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189223 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-k8s-cni-cncf-io\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189268 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189301 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod 
\"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-socket-dir-parent\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189368 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189373 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-kubelet\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-multus\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-system-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" 
Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189487 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-var-lib-cni-bin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189509 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189534 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-system-cni-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189600 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-cni-dir\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-cnibin\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189631 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189603 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-hostroot\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189696 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189716 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189971 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.189998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 
30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-etc-kubernetes\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9acdd0f1-560b-4246-b045-c598c5bbb93d-mcd-auth-proxy-config\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee35b719-afe2-45cf-8726-00c19502f02f-os-release\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 
21:14:49.190215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9acdd0f1-560b-4246-b045-c598c5bbb93d-rootfs\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-netns\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190244 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-host-run-multus-certs\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190247 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190337 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190497 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ee35b719-afe2-45cf-8726-00c19502f02f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190643 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190693 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190882 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cni-binary-copy\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"ovnkube-node-n8bjd\" (UID: 
\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.190943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-cnibin\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.191410 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-multus-daemon-config\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.195774 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.196481 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9acdd0f1-560b-4246-b045-c598c5bbb93d-proxy-tls\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.199603 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.212383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g67rv\" (UniqueName: \"kubernetes.io/projected/ee35b719-afe2-45cf-8726-00c19502f02f-kube-api-access-g67rv\") pod \"multus-additional-cni-plugins-xxc7s\" (UID: \"ee35b719-afe2-45cf-8726-00c19502f02f\") " pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.216462 
4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qx87\" (UniqueName: \"kubernetes.io/projected/bcecdc4b-6607-4e4e-a9b5-49b85c030d21-kube-api-access-2qx87\") pod \"multus-5sgk2\" (UID: \"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\") " pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.217147 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfrj\" (UniqueName: \"kubernetes.io/projected/9acdd0f1-560b-4246-b045-c598c5bbb93d-kube-api-access-8tfrj\") pod \"machine-config-daemon-vgfkp\" (UID: \"9acdd0f1-560b-4246-b045-c598c5bbb93d\") " pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.220412 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.222305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"ovnkube-node-n8bjd\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.243775 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.266388 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.267994 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.297635 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.325240 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5sgk2" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.326634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.335491 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.336256 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcecdc4b_6607_4e4e_a9b5_49b85c030d21.slice/crio-f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f WatchSource:0}: Error finding container f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f: Status 404 returned error can't find the container with id f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.344964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.355105 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9eb477_4a6d_4f9c_ba41_5b79f5779ffb.slice/crio-58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0 WatchSource:0}: Error finding container 58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0: Status 404 returned error can't find the container with id 58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0 Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.359347 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acdd0f1_560b_4246_b045_c598c5bbb93d.slice/crio-dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424 WatchSource:0}: Error finding container dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424: Status 404 returned error can't find the container with id dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424 Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.362593 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.371856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.379715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.392103 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: W0130 21:14:49.400069 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee35b719_afe2_45cf_8726_00c19502f02f.slice/crio-3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d WatchSource:0}: Error finding container 3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d: Status 404 returned error can't find the container with id 3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.406315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.419481 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.432634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.449405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.461559 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.471837 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.476122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.577922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.593959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594141 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.594236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594415 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594448 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594464 4751 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.594534 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.594503929 +0000 UTC m=+36.340326578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595449 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595537 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595510833 +0000 UTC m=+36.341333482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595530 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595613 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595604785 +0000 UTC m=+36.341427424 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.595651 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.595622275 +0000 UTC m=+36.341444944 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.680111 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.695841 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696061 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696105 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696121 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.696188 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:14:57.696170748 +0000 UTC m=+36.441993407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.782615 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885448 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.885459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.928156 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:46:45.464069456 +0000 UTC Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.975705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.975875 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.976376 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.976477 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.976598 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:49 crc kubenswrapper[4751]: E0130 21:14:49.976688 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:49 crc kubenswrapper[4751]: I0130 21:14:49.988382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:49Z","lastTransitionTime":"2026-01-30T21:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.090591 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145778 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a" exitCode=0 Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145899 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.145986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerStarted","Data":"3bcc38e3386dcff5d6367437a7d2019e5c6ebbb9b1f79e0df81b2cef34d1b95d"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148605 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" exitCode=0 Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.148751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.151900 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.151945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" 
event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"f52573c687f429b709a9e73424fbcb1cd16f1b9d9776aa0d7c54db2f686c9d7f"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.154598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xdclq" event={"ID":"3e4f7eaf-acd6-4cf5-874c-d88c4e479113","Type":"ContainerStarted","Data":"63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.156854 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"dcdd2681f89d24331613a907d6b693b2a900e2871cb876bc96ed6eafef9f6424"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.169287 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.190947 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.193841 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.208455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.228090 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.238860 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.252064 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.264438 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.288553 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.296321 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.312973 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: 
I0130 21:14:50.325103 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.343541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.362068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.372300 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.393540 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.398233 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.404444 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.418913 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-d
ir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.430810 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.441315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.453587 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.465136 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.476550 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.487481 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.500318 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.501648 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.510427 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.527826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.537715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.605200 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.708183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.811997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.812011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.812025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.914126 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:50Z","lastTransitionTime":"2026-01-30T21:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:50 crc kubenswrapper[4751]: I0130 21:14:50.931384 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:53:44.585202264 +0000 UTC Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.015958 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.118751 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.162072 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d" exitCode=0 Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.162150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.167643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.178157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.193713 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.209867 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221467 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.221563 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.222503 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.237502 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.256845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.266053 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-h48zj"]
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.267520 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.269546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270526 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.270476 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.276285 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.301836 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z 
is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.309805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.312236 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1b
f8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.322754 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.323390 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.333756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.346846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.356428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.367475 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.378014 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.389368 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.400111 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.410597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35ca91c5-bd9e-486b-943d-8123e2f6e84c-host\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.411587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35ca91c5-bd9e-486b-943d-8123e2f6e84c-serviceca\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.412574 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.423321 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.425842 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.431671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9j8\" (UniqueName: \"kubernetes.io/projected/35ca91c5-bd9e-486b-943d-8123e2f6e84c-kube-api-access-wx9j8\") pod \"node-ca-h48zj\" (UID: \"35ca91c5-bd9e-486b-943d-8123e2f6e84c\") " pod="openshift-image-registry/node-ca-h48zj"
Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.445856 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.457895 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.468025 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.480476 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.494686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.504461 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.517663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.529184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.531835 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.595130 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h48zj" Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.610661 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ca91c5_bd9e_486b_943d_8123e2f6e84c.slice/crio-72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e WatchSource:0}: Error finding container 72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e: Status 404 returned error can't find the container with id 72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.632157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.735233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.747593 4751 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.748885 4751 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.748926 4751 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.749637 4751 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: W0130 21:14:51.749785 4751 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.846588 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.931619 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:23:25.723768978 +0000 UTC Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.949296 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:51Z","lastTransitionTime":"2026-01-30T21:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974738 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.974823 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.974728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.974895 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:51 crc kubenswrapper[4751]: E0130 21:14:51.975030 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:51 crc kubenswrapper[4751]: I0130 21:14:51.988124 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.000357 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:51Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.016091 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.031786 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.043826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.051127 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.062493 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.079751 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.089813 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.109060 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.121245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.131432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.144002 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.153733 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.156956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.175228 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T
21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.175448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.177086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h48zj" event={"ID":"35ca91c5-bd9e-486b-943d-8123e2f6e84c","Type":"ContainerStarted","Data":"7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.177142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h48zj" event={"ID":"35ca91c5-bd9e-486b-943d-8123e2f6e84c","Type":"ContainerStarted","Data":"72b5fca5e2ba2ba03213f1a34b4c22f465e7f3ceed5749c155ef5736f1953d1e"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.179277 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d" exitCode=0 Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.179315 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.194397 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.205186 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.231822 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.245089 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.256705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.259101 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.269979 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.283675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.296365 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.317513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.335820 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.349035 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.359681 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.363496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.379762 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.396174 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.407847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.417987 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.434990 4751 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnku
be-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.448078 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.457907 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.461497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.469774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.482958 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.495902 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.510760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.526868 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.539915 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.551842 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.562685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.564705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.582308 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:52Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.621432 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.667866 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.770933 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.874362 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.932617 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:43:24.098303272 +0000 UTC
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.937449 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.959689 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:52 crc kubenswrapper[4751]: I0130 21:14:52.977394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:52Z","lastTransitionTime":"2026-01-30T21:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.080144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.182783 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.189069 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6" exitCode=0
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.189150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6"}
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.216499 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.237079 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.251108 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.269023 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.281717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.282700 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.285867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.296943 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.332654 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168
.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.349644 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.366777 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.387955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.401852 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.417528 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.437986 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.448835 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:53Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490563 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490571 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.490602 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.593694 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.696587 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.799361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.902557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:53Z","lastTransitionTime":"2026-01-30T21:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.932804 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:38:49.766542375 +0000 UTC Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975622 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975851 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.976004 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:53 crc kubenswrapper[4751]: I0130 21:14:53.975622 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:53 crc kubenswrapper[4751]: E0130 21:14:53.976148 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.005690 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.108647 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.203307 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6" exitCode=0 Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.203389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.211996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.212138 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.213621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.237971 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.253351 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.268078 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.285254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.301601 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.314822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.319274 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.333257 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.349700 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.364745 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.380955 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.397803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.414015 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.417528 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.430662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.442642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:54Z is after 2025-08-24T17:21:41Z"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.520705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.623834 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.726207 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.829936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.829987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.830033 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.932654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:54Z","lastTransitionTime":"2026-01-30T21:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
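The entries above repeat the same KubeletNotReady condition roughly every 100ms: the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears in the directory named in the message. A minimal Go sketch of that kind of presence check, assuming the logged path and a conventional set of config extensions; this is an illustration of the check, not CRI-O's or the kubelet's actual code:

package main

import (
	"fmt"
	"path/filepath"
)

// cniConfigPresent reports whether confDir contains at least one CNI
// network configuration file. The extension set is an assumption.
func cniConfigPresent(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	if !cniConfigPresent(confDir) {
		fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", confDir)
	}
}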
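The certificate_manager.go entry just below reports rotation state for the kubelet-serving certificate: expiration 2026-02-24, rotation deadline 2025-12-29. With the node clock at 2026-01-30 the deadline is already past, so a rotation attempt is due. A sketch of how such a deadline can be derived, assuming client-go's approach of jittering the deadline into the late part of the validity window; the 70-90% band and the NotBefore value are assumptions:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a randomized deadline late in the certificate's
// validity window. The 70-90% jitter band is an assumed constant, not
// necessarily the kubelet's exact value.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(validity) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter comes from the logged expiration; NotBefore is assumed.
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}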
Jan 30 21:14:54 crc kubenswrapper[4751]: I0130 21:14:54.933350 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:44:53.631722045 +0000 UTC
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.036780 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.139711 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.223708 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee35b719-afe2-45cf-8726-00c19502f02f" containerID="483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed" exitCode=0 Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.223760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerDied","Data":"483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.243945 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.247245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.267067 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.284717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.302129 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.317642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.330153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.347187 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.351230 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.366586 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.377105 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.390739 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.402510 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
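The patch bodies in these status_manager.go errors are hard to read because the JSON is quoted twice: once when the patch is rendered into the error string, and again when klog writes err="...". A small stdlib-only helper (hypothetical, for reading these logs) that unquotes one level and pretty-prints the result, using a shortened fragment of the multus-5sgk2 patch above; a raw journal line may need a second Unquote pass:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// One level of Go-style quoting, shortened from a patch body above.
	escaped := `"{\"metadata\":{\"uid\":\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Ready\"}]}}"`
	unquoted, err := strconv.Unquote(escaped)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}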
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.414724 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.428231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.447179 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:55Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.449956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.450074 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.551911 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.653849 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.755518 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.857918 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.933662 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:26:59.878073919 +0000 UTC Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.960082 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:55Z","lastTransitionTime":"2026-01-30T21:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975476 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975511 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975633 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:55 crc kubenswrapper[4751]: I0130 21:14:55.975679 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975757 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:55 crc kubenswrapper[4751]: E0130 21:14:55.975857 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.066708 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.169366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.232242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.232730 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.239686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" event={"ID":"ee35b719-afe2-45cf-8726-00c19502f02f","Type":"ContainerStarted","Data":"a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.265585 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.272373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.285100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.325161 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.325522 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.346271 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375348 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.375389 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.397175 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.412071 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.428252 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.453652 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.470719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.478782 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.482743 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.511403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run
/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-ope
nvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.536358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserve
r-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.551527 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.568665 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.582484 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.587588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.604922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.621438 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.645851 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.661627 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.676676 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.685532 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.699394 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.719471 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.738616 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.758820 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.780770 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.787979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.788043 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.796518 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.812263 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.830152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:56Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.890300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.890830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.891594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.934744 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:26:15.661475227 +0000 UTC Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:56 crc kubenswrapper[4751]: I0130 21:14:56.994151 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:56Z","lastTransitionTime":"2026-01-30T21:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.097542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.200952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.201247 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.243477 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.244144 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.278011 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.304682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.308264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.326886 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.354663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.376915 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.396507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.407970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.408091 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.415966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.434783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.453801 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.469164 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.487985 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.508242 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.515621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.532423 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.547408 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.579009 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:57Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.620511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.675808 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676005 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676032 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676277 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676302 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676409 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676372682 +0000 UTC m=+52.422195371 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676472 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676444723 +0000 UTC m=+52.422267412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.676507 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.676493256 +0000 UTC m=+52.422315935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.677424 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.677639 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.67756604 +0000 UTC m=+52.423388749 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.725735 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.776426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776862 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776923 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.776944 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.777021 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:13.776996526 +0000 UTC m=+52.522819215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.828860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.932905 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:57Z","lastTransitionTime":"2026-01-30T21:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.936055 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:18:45.856778915 +0000 UTC Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.975371 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.975546 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.976052 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.976155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:57 crc kubenswrapper[4751]: I0130 21:14:57.977026 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:57 crc kubenswrapper[4751]: E0130 21:14:57.977297 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.041313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.099809 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.119959 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.124702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.154829 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.160747 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.180314 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.183954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.183989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.184031 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.203753 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.207503 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.220015 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:58Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:58 crc kubenswrapper[4751]: E0130 21:14:58.220216 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
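The consecutive patch attempts above all fail for the same reason: before admitting the node-status patch, the API server must call the validating webhook "node.network-node-identity.openshift.io" at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the log's clock of 2026-01-30T21:14:58Z. Once the retries are exhausted (upstream kubelet allows nodeStatusUpdateRetry = 5 attempts per sync), the kubelet gives up with "update node status exceeds retry count" and tries again on the next sync. The TLS failure can be reproduced independently of the kubelet; a minimal probe sketch, assuming Python 3 is available on the node and using the endpoint quoted in the error:

    import socket
    import ssl

    # Attempt a verified handshake against the webhook endpoint named in the
    # kubelet error. create_default_context() enables certificate verification,
    # so an expired serving certificate aborts the handshake just as it does
    # for the API server's webhook client.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
                print("handshake succeeded; certificate verified")
    except ssl.SSLCertVerificationError as err:
        # Prints the first verification failure OpenSSL hits, e.g.
        # 'certificate has expired' (or an issuer error if the webhook CA is
        # not in the local trust store).
        print("verification failed:", err.verify_message)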
event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.222456 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.246469 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.325758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.428868 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.531570 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.634899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.634992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.635095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.738778 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
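The node stays NotReady throughout this stretch because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (the directory OpenShift points CRI-O at, in place of the default /etc/cni/net.d); the kubelet then re-records the Ready=False condition on each pass of its sync loop, which is why the same event burst repeats roughly every 100 ms above. A spot-check of the directory the message names is trivial; a sketch, again assuming Python 3 on the node:

    import pathlib

    # Path taken verbatim from the kubelet message above. CNI configurations
    # conventionally end in .conf or .conflist; while the directory is empty
    # the runtime keeps reporting NetworkReady=false.
    cni_dir = pathlib.Path("/etc/kubernetes/cni/net.d")
    confs = sorted(p.name for p in cni_dir.glob("*.conf*")) if cni_dir.is_dir() else []
    print(confs or "no CNI configuration files found; runtime stays NetworkReady=false")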
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.842266 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.937379 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:50:12.98371259 +0000 UTC
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:58 crc kubenswrapper[4751]: I0130 21:14:58.944780 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:58Z","lastTransitionTime":"2026-01-30T21:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
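Two certificate lifetimes are visible at this point and they diverge sharply: the kubelet's own kubernetes.io/kubelet-serving certificate is still valid (it expires 2026-02-24 05:53:03 UTC, though its rotation deadline of 2025-12-31 19:50:12 UTC has already passed, so the certificate manager is due to rotate it), while the webhook certificate rejected above expired back on 2025-08-24. The arithmetic, as a worked sketch with the four instants transcribed by hand from the surrounding records:

    from datetime import datetime, timezone

    # All four timestamps are copied from the log records above.
    log_clock = datetime(2026, 1, 30, 21, 14, 58, tzinfo=timezone.utc)
    webhook_not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
    serving_expiry = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    rotation_deadline = datetime(2025, 12, 31, 19, 50, 12, tzinfo=timezone.utc)

    print(log_clock - webhook_not_after)   # ~159 days: webhook cert long expired
    print(serving_expiry - log_clock)      # ~24 days: serving cert still valid
    print(log_clock - rotation_deadline)   # ~30 days past deadline: rotation is due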
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.048214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.150786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.249430 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.252921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.252982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.253044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.282727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.302913 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.322062 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.343650 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.355938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.356770 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.367686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.387030 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.407089 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.426637 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.446088 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460874 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.460951 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.462099 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 
2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.495435 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
ea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.513446 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.535051 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab809005
7596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.549402 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.563554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.563913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.564412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.566021 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:14:59Z is after 2025-08-24T17:21:41Z" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.666998 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.667113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.770556 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.874201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.938012 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:00:10.877226964 +0000 UTC Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.974970 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.975024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.974971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975199 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975408 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:14:59 crc kubenswrapper[4751]: E0130 21:14:59.975547 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:14:59 crc kubenswrapper[4751]: I0130 21:14:59.976994 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:14:59Z","lastTransitionTime":"2026-01-30T21:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.079821 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.183786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
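The certificate_manager.go:356 entry above shows the kubelet-serving certificate is still valid (expires 2026-02-24) but its rotation deadline of 2025-11-17 is already in the past relative to the node clock. client-go's certificate manager derives that deadline as a jittered fraction (roughly 70-90%) of the certificate's validity window; the sketch below assumes that behavior and a one-year lifetime, both stated as assumptions rather than read from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics the jittered deadline: a point 70-90% of the way
// through the validity window (assumed behavior, not kubelet source).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log entry
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year lifetime
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}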
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.254029 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.257075 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b" exitCode=1 Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.257311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.258228 4751 scope.go:117] "RemoveContainer" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.284937 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286170 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.286183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.300526 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.317489 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.329847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.340923 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.353345 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.365305 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.382202 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.389095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.397846 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.413838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.425131 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.440751 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.454187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.463105 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:00Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.491603 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.594877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.595060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.595210 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.698883 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.801971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.802084 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.904943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.905074 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:00Z","lastTransitionTime":"2026-01-30T21:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:00 crc kubenswrapper[4751]: I0130 21:15:00.939198 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:19:22.409344606 +0000 UTC
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.007736 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.111552 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.214884 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.265915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.270951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.271128 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.306199 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.318999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.319136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.323491 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.377389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.398212 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.419658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.421872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.422925 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.441756 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.457196 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.480724 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.499772 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.519038 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.525726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.525950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc 
kubenswrapper[4751]: I0130 21:15:01.526080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.526204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.526339 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.540565 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.558128 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.577679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.595769 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.629228 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.731742 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.834985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.835015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.835029 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.881800 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8"] Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.882195 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.884598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.886627 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.899505 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.912588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.923658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.935119 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.936837 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:01Z","lastTransitionTime":"2026-01-30T21:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.939506 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:43:41.14528966 +0000 UTC Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.951662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.965058 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.975449 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975154 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.975719 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.975109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:01 crc kubenswrapper[4751]: E0130 21:15:01.976194 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.976916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986838 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:01Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986964 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.986987 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:01 crc kubenswrapper[4751]: I0130 21:15:01.987034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.006849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.021542 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.030959 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.038819 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.042462 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.054593 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.071187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.087729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.088289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.088926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 
21:15:02.089254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.093480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.102686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.111254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccttm\" (UniqueName: \"kubernetes.io/projected/f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4-kube-api-access-ccttm\") pod \"ovnkube-control-plane-749d76644c-8h2z8\" (UID: \"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.120068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.131555 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141174 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.141665 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.150918 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.159674 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.169778 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.179254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.187160 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.197990 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.203713 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: W0130 21:15:02.214159 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ffb2e7_2c69_43bb_84cd_821c1dffd7d4.slice/crio-67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016 WatchSource:0}: Error finding container 67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016: Status 404 returned error can't find the container with id 67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016 Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.215925 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.225231 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.236513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc 
kubenswrapper[4751]: I0130 21:15:02.243350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243360 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.243384 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.247808 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.259899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.275361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"67ece8c9b14d9ad1f82c2b49fc44ec09ae56fca68fc40f980def0a57c36b5016"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.277370 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.277854 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/0.log"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282768 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" exitCode=1
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.282828 4751 scope.go:117] "RemoveContainer" containerID="3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.283511 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"
Jan 30 21:15:02 crc kubenswrapper[4751]: E0130 21:15:02.283652 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.297412 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.314173 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 
services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.325252 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.336243 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.345236 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.347358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z
\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.361869 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5
d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.374112 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.386679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.399270 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.412312 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.424098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.437967 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:02 crc 
kubenswrapper[4751]: I0130 21:15:02.447378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.447405 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.452819 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.463828 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.477437 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:02Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550341 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.550392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.653304 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.756433 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.858975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.859080 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.939928 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:56:20.253169735 +0000 UTC
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:02 crc kubenswrapper[4751]: I0130 21:15:02.961770 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:02Z","lastTransitionTime":"2026-01-30T21:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.012222 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"]
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.012920 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.013017 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.032505 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.068391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.069375 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.082970 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.098813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.099140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.114738 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.136436 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.154266 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.174855 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.177541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.200588 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.201108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.201252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.201391 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.201496 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:03.701472238 +0000 UTC m=+42.447294887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.220704 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.230043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zds78\" (UniqueName: \"kubernetes.io/projected/3c30a687-0b58-4a63-b9e3-3a3624676358-kube-api-access-zds78\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.235413 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.255195 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.268818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.280660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.288538 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.292493 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.302215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.302299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" event={"ID":"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4","Type":"ContainerStarted","Data":"3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.308429 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.327776 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.340100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.351860 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.363293 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.376667 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc 
kubenswrapper[4751]: I0130 21:15:03.383557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.383634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.386972 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.404610 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.420203 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.434041 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.450152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.463110 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.478654 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.486932 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.494798 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.513189 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.532541 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.550966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.563245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.588428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3038bfe2559098cf156655ef3f60468f293ad5f2cfeb15002f8e84a764f1654b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI0130 21:14:58.875209 6139 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 21:14:58.875241 6139 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 21:14:58.875395 6139 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 21:14:58.875399 6139 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 21:14:58.875543 6139 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 21:14:58.875664 6139 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 21:14:58.875730 6139 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 21:14:58.875776 6139 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 21:14:58.875836 6139 factory.go:656] Stopping watch factory\\\\nI0130 21:14:58.875896 6139 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 21:14:58.875948 6139 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 21:14:58.875750 6139 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 21:14:58.876008 6139 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 21:14:58.876020 6139 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 
21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:03Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.589253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.691917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.707415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.707541 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.707594 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:04.70757922 +0000 UTC m=+43.453401869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.794775 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.898370 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:03Z","lastTransitionTime":"2026-01-30T21:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.941014 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:56:10.160385303 +0000 UTC Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975610 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:03 crc kubenswrapper[4751]: I0130 21:15:03.975628 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.975790 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.976011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:03 crc kubenswrapper[4751]: E0130 21:15:03.976159 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.001148 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.104212 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.208159 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.311184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.413981 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.518446 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.621601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.718874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.719345 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.719531 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:06.719506339 +0000 UTC m=+45.465328998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.724431 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.827700 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.930536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.930966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.931439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:04Z","lastTransitionTime":"2026-01-30T21:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.941966 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:48:46.543795961 +0000 UTC Jan 30 21:15:04 crc kubenswrapper[4751]: I0130 21:15:04.975433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:04 crc kubenswrapper[4751]: E0130 21:15:04.975560 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.034990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.035232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.035517 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.138475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.242125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.344750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.345594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.448993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.449116 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.551500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.654946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.757909 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.861145 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.865146 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.866454 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.866737 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.885863 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.906172 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.926922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.942493 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:03:33.992828629 +0000 UTC Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.948740 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964198 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.964242 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:05Z","lastTransitionTime":"2026-01-30T21:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.968231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.974737 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.974798 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.974952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.975112 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.975467 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:05 crc kubenswrapper[4751]: E0130 21:15:05.975863 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:05 crc kubenswrapper[4751]: I0130 21:15:05.990283 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:05Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.006106 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.025428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.040849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.067388 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.070492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.087760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.100547 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.119229 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.134153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.153803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.167871 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:06Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169348 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.169379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.272621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.375920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.375983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.376044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.479412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.581822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.582515 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.685942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.686058 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.741806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.742015 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.742089 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:10.742065087 +0000 UTC m=+49.487887776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.789251 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
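Editor's note: the three 21:15:06.74xxxx entries above show the failure chain for the metrics-certs volume: the reconciler starts the mount, the secret plugin cannot resolve openshift-multus/metrics-daemon-secret, and nestedpendingoperations schedules the next attempt for 21:15:10 with durationBeforeRetry 4s. A minimal Go sketch of that doubling backoff follows; the 500ms base and the cap are assumptions for illustration, not values read from the kubelet source.

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the doubling retry backoff behind the "No retries permitted
// until ... (durationBeforeRetry 4s)" entry above. Base and cap are
// illustrative assumptions, not values read from the kubelet source.
func main() {
	backoff := 500 * time.Millisecond
	maxBackoff := 2*time.Minute + 2*time.Second // assumed cap
	lastAttempt := time.Date(2026, time.January, 30, 21, 15, 6, 742065087, time.UTC)

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %v)\n",
			attempt, lastAttempt.Add(backoff).Format(time.RFC3339Nano), backoff)
		lastAttempt = lastAttempt.Add(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```

Under these assumed values the fourth failure yields exactly the 4s wait seen in the entry above.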
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.892302 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.942831 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:48:48.48462215 +0000 UTC
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.975518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:06 crc kubenswrapper[4751]: E0130 21:15:06.975668 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:06 crc kubenswrapper[4751]: I0130 21:15:06.995455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:06Z","lastTransitionTime":"2026-01-30T21:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.099414 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.202553 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.306214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.409689 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.513422 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.616182 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.719522 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.821983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.822000 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.924654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:07Z","lastTransitionTime":"2026-01-30T21:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.943378 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:59:42.92398453 +0000 UTC
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975744 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.975846 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:07 crc kubenswrapper[4751]: I0130 21:15:07.975867 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.976023 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:07 crc kubenswrapper[4751]: E0130 21:15:07.976124 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.027437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
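Editor's note: every pod_workers.go "Error syncing pod" entry above has the same root cause reported in the node's Ready condition: /etc/kubernetes/cni/net.d/ contains no CNI configuration, so no pod sandbox can be given a network. A small illustrative Go check of that directory (the path is taken verbatim from the log; the extension filter is an assumption about typical CNI config file names):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists CNI config files in the directory named by the
// NetworkPluginNotReady messages above. An empty result matches the
// "no CNI configuration file" condition the kubelet keeps reporting.
func main() {
	dir := "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// .conf, .conflist and .json are the usual CNI config suffixes.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; the network provider has not written its config yet")
	}
}
```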
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.129905 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.232660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.335271 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.381181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.401548 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z"
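Editor's note: the patch failure above is not a kubelet-side problem. The status PATCH is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, long before the node's current time of 2026-01-30. A hedged Go sketch that inspects that certificate the same way the failing TLS handshake does (endpoint address taken verbatim from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

// Connects to the webhook endpoint from the patch error above and prints
// the validity window of its serving certificate. InsecureSkipVerify is
// set only so the handshake completes and the expired certificate can
// still be inspected.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter)) // matches "x509: certificate has expired"
}
```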
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.406720 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.426689 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z"
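Editor's note: the kubelet makes several back-to-back PATCH attempts per status update (the .401548 and .426689 entries here, the .454166 entry below, and the .477699 attempt truncated at the end of this capture) before waiting for the next sync period; the retry count is a small fixed constant in the kubelet source. For reference, the Ready condition payload that recurs throughout these entries decodes into a minimal Go struct like this (field set trimmed to what the log actually shows):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of the node condition object logged by setters.go:603
// and embedded in the failing status patches above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from a "Node became not ready" entry, message shortened.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s: %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```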
event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.432565 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.454166 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:08Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.458882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.458957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.459079 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.482993 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.501097 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.503182 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.606820 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.709920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.710848 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.814750 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.917844 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:08Z","lastTransitionTime":"2026-01-30T21:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.944495 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:52:36.886358016 +0000 UTC Jan 30 21:15:08 crc kubenswrapper[4751]: I0130 21:15:08.974948 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:08 crc kubenswrapper[4751]: E0130 21:15:08.975079 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.020321 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.123714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.226420 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.329319 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.432840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.536275 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.639601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.742137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.845797 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.945048 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:06:58.65643281 +0000 UTC Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.948594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:09Z","lastTransitionTime":"2026-01-30T21:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974845 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.974991 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:09 crc kubenswrapper[4751]: I0130 21:15:09.974860 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.975168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:09 crc kubenswrapper[4751]: E0130 21:15:09.975441 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.052243 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.155267 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.261992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.262018 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.364986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.365001 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.468560 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.571106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.571436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.572903 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.676222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.676993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.677038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.781595 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.789146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.789354 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.789440 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:18.789420345 +0000 UTC m=+57.535243004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.884926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.885078 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.945138 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:51:57.985788634 +0000 UTC Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.975493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:10 crc kubenswrapper[4751]: E0130 21:15:10.975620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:10 crc kubenswrapper[4751]: I0130 21:15:10.988816 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:10Z","lastTransitionTime":"2026-01-30T21:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.092846 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.195834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.196056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.196246 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.299912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.403598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.403974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.404387 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.507745 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.611211 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.714703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.817786 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.920873 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:11Z","lastTransitionTime":"2026-01-30T21:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.946263 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:14:19.622054116 +0000 UTC Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974796 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974867 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.974965 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.974992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.975048 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:11 crc kubenswrapper[4751]: E0130 21:15:11.975093 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:11 crc kubenswrapper[4751]: I0130 21:15:11.994480 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:11Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.010203 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.022885 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.031059 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.044361 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.057904 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.072798 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.096277 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.116946 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.126466 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.131535 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.149916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.169761 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.191841 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.208787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.228989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.229009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.229023 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.238315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.257092 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.272781 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:12Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.331970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.332139 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.435474 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.537984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.538073 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.641924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.744569 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.847144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.947264 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:26:04.098821301 +0000 UTC Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949371 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.949383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:12Z","lastTransitionTime":"2026-01-30T21:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:12 crc kubenswrapper[4751]: I0130 21:15:12.975099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:12 crc kubenswrapper[4751]: E0130 21:15:12.975274 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.051680 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.155202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.155254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.155274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.155292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.155304 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.258542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.258624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.258641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.258666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.258683 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.361235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.361293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.361310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.361360 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.361378 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.464449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.464525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.464548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.464576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.464598 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.567459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.567977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.568183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.568408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.568611 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.687975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.688295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.688597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.688769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.688928 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.721679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.721788 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.721770484 +0000 UTC m=+84.467593133 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.721894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722028 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722075 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722065181 +0000 UTC m=+84.467887830 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.722219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.722273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722348 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722387568 +0000 UTC m=+84.468210217 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722619 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722701 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722722 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.722825 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.722799698 +0000 UTC m=+84.468622347 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.791791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.791871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.791894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.791925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.791948 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.823708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823912 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823960 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.823980 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.824081 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:45.824055987 +0000 UTC m=+84.569878676 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.894460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.894594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.894635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.894653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.894664 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.948395 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:37:41.417987471 +0000 UTC Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974765 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974767 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.974847 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975010 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975099 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:13 crc kubenswrapper[4751]: E0130 21:15:13.975167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:13 crc kubenswrapper[4751]: I0130 21:15:13.997379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:13Z","lastTransitionTime":"2026-01-30T21:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.100713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.203878 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.306991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.307113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.409863 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512283 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.512352 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615074 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.615137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.719854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.823252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.823308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.823352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.823376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.823394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.926178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.926244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.926261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.926289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.926307 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:14Z","lastTransitionTime":"2026-01-30T21:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.949009 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:22:20.490884685 +0000 UTC Jan 30 21:15:14 crc kubenswrapper[4751]: I0130 21:15:14.975482 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:14 crc kubenswrapper[4751]: E0130 21:15:14.975678 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.029133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.029175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.029187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.029200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.029210 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.132560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.132603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.132617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.132634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.132645 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.234847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.234889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.234901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.234917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.234927 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.337832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.337910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.337932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.337956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.337973 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.441790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.441839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.441852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.441869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.441883 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.544244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.544300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.544314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.544378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.544397 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.646540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.646582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.646593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.646609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.646621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.749478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.749554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.749576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.749602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.749621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.853051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.853101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.853110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.853126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.853138 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.949344 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:23:46.664502312 +0000 UTC Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.955847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.956093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.956322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.956522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.956644 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:15Z","lastTransitionTime":"2026-01-30T21:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975126 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975185 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.975290 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.975600 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:15 crc kubenswrapper[4751]: I0130 21:15:15.975960 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:15 crc kubenswrapper[4751]: E0130 21:15:15.976286 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.059201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.059263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.059276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.059291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.059303 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.162403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.162503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.162653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.162689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.162714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.266006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.266060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.266078 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.266103 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.266120 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.369076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.369210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.369238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.369272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.369296 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.472295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.472431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.472457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.472485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.472502 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.575456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.575514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.575536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.575567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.575589 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.678511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.678571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.678591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.678619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.678640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.781518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.781587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.781609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.781638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.781662 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.884270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.884358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.884378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.884404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.884422 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.949583 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:45:34.10408856 +0000 UTC Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.975532 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:16 crc kubenswrapper[4751]: E0130 21:15:16.975721 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.987407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.987478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.987502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.987530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:16 crc kubenswrapper[4751]: I0130 21:15:16.987555 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:16Z","lastTransitionTime":"2026-01-30T21:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.090846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.090898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.090917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.090938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.090955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.194253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.194316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.194368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.194393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.194411 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.297503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.297564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.297588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.297618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.297639 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.400748 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.504961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.505197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.505438 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.608636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.609617 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.713763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.714909 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.812758 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.818682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.828394 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.847968 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336
288f833aec37c4083d33e70f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.865757 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.881836 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.899097 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.917802 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.922439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:17Z","lastTransitionTime":"2026-01-30T21:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.937109 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.949752 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:17:00.142597209 +0000 UTC Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.958320 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.975802 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.976267 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976774 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976522 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:17 crc kubenswrapper[4751]: I0130 21:15:17.976909 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f"
Jan 30 21:15:17 crc kubenswrapper[4751]: E0130 21:15:17.976924 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.001046 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:17Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.023187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.024696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.024758 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.024776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.024803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.024822 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.040496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.062385 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.083573 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.100987 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.113154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.124710 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127897 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.127938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.230960 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.336545 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.360961 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.366242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.367241 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.386151 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22a
c3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.410609 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.435301 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.439822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.453245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.478582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.496659 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.511510 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.525200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.542630 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.543944 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.561004 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.575110 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.590069 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.599496 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.612463 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.622566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.635193 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.644900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.650508 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.709848 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.722673 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.726879 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.744288 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.747641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.763398 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.767860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.779596 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.783369 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.795370 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.795597 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797351 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.797400 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.874024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.874182 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.874258 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:15:34.874237641 +0000 UTC m=+73.620060290 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.900859 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:18Z","lastTransitionTime":"2026-01-30T21:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.950230 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:39:10.853856798 +0000 UTC Jan 30 21:15:18 crc kubenswrapper[4751]: I0130 21:15:18.974896 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:18 crc kubenswrapper[4751]: E0130 21:15:18.975163 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.004872 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.108823 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.211860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.320947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.321110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.373664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.374850 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/1.log" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379756 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" exitCode=1 Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.379870 4751 scope.go:117] "RemoveContainer" containerID="12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.381034 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.381308 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.409137 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.424964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.425100 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.429488 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.446617 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.461669 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.481206 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.497859 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.511170 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.527180 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.529719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.546083 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.562140 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.580118 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.598157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.630391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.658141 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12246884e8bd1ec86d7586a61ba35d0e126d9336288f833aec37c4083d33e70f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:02Z\\\",\\\"message\\\":\\\"enshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0130 21:15:02.132224 6280 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 7.181037ms\\\\nI0130 21:15:02.132235 6280 services_controller.go:356] Processing sync for service openshift-kube-scheduler/scheduler for network=default\\\\nI0130 21:15:02.132237 6280 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}\\\\nI0130 21:15:02.132271 6280 services_controller.go:360] Finished syncing service api on namespace openshift-apiserver for network=default : 4.894454ms\\\\nI0130 21:15:02.132289 6280 services_controller.go:356] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics for network=default\\\\nI0130 21:15:02.132281 6280 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-5sgk2\\\\nI0130 21:15:02.132313 6280 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-5sgk2\\\\nF0130 21:15:02.132347 6280 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.674030 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.686983 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.701032 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.712979 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:19Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.732924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.834994 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.835020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.835041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938405 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.938455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:19Z","lastTransitionTime":"2026-01-30T21:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.951147 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:23:57.127728638 +0000 UTC Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975596 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975679 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.975744 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.975888 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:19 crc kubenswrapper[4751]: I0130 21:15:19.975684 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:19 crc kubenswrapper[4751]: E0130 21:15:19.976142 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.041718 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.144961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.248612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351189 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.351307 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.385947 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.391428 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:20 crc kubenswrapper[4751]: E0130 21:15:20.391775 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.410051 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.430514 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455238 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.455319 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.466153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f8
32aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.485189 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.502485 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.520248 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.540854 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.557917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.560437 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.580429 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.596009 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.611830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.634922 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.654264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.661437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.678166 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.698948 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.717821 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.737563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:20Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.764381 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.867965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.868098 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.952075 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:52:19.698946166 +0000 UTC Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.971286 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:20Z","lastTransitionTime":"2026-01-30T21:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:20 crc kubenswrapper[4751]: I0130 21:15:20.975126 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:20 crc kubenswrapper[4751]: E0130 21:15:20.975507 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.074965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.075955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.178896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.179073 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.179244 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.282798 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.386459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.488858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.489507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.592861 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.695359 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.797860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.900702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:21Z","lastTransitionTime":"2026-01-30T21:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.952250 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:46:01.579456107 +0000 UTC Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.974862 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.974894 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975082 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975233 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.975488 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:21 crc kubenswrapper[4751]: E0130 21:15:21.975771 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:21 crc kubenswrapper[4751]: I0130 21:15:21.995638 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:21Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 
21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.003779 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.020894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.039056 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.063458 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.086132 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.105655 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.108615 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.127774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.145658 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.168972 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.200918 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.208741 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.216738 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.232029 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.248787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.268533 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.290519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.312808 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.328662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:22Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.415957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.416103 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.518974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.519077 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.621677 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.725250 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.828197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.930811 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:22Z","lastTransitionTime":"2026-01-30T21:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.953281 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:53:47.339332728 +0000 UTC Jan 30 21:15:22 crc kubenswrapper[4751]: I0130 21:15:22.975730 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:22 crc kubenswrapper[4751]: E0130 21:15:22.975970 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.034407 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.137404 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.240957 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.343887 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.446791 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.550500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.653922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757346 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.757429 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.860798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.861009 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.954256 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:32:23.18169999 +0000 UTC Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.964627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:23Z","lastTransitionTime":"2026-01-30T21:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.974895 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.974968 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:23 crc kubenswrapper[4751]: I0130 21:15:23.975222 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975454 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975588 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:23 crc kubenswrapper[4751]: E0130 21:15:23.975665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.067852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.068653 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.171867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.275994 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.380760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.482910 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.585960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.586015 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.689924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.792699 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.896424 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:24Z","lastTransitionTime":"2026-01-30T21:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.955179 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:31:28.865214967 +0000 UTC Jan 30 21:15:24 crc kubenswrapper[4751]: I0130 21:15:24.975690 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:24 crc kubenswrapper[4751]: E0130 21:15:24.975941 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.000423 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.105777 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.208825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.209773 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.312682 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.415631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.518948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.518997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.519057 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.621569 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.724581 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.827649 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.930868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.931552 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:25Z","lastTransitionTime":"2026-01-30T21:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.955931 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:31:13.067896086 +0000 UTC Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975282 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975416 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.975502 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.975709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:25 crc kubenswrapper[4751]: I0130 21:15:25.975933 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:25 crc kubenswrapper[4751]: E0130 21:15:25.976038 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.034130 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.136970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.137059 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.239688 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.341889 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444349 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.444359 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.547421 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.650159 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
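Each status-update tick records the same four node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) followed by a "Node became not ready" condition, and the loop repeats roughly every 100 ms while the node status cannot be written. A short stdlib sketch for tallying those events in a saved copy of this log; the filename kubelet.log is an assumption:

    # count_node_events.py - tally "Recording event message for node" events
    # in a saved kubelet log to show how often the status loop repeats.
    import re
    from collections import Counter

    EVENT_RE = re.compile(r'"Recording event message for node" node="crc" event="([A-Za-z]+)"')

    def main() -> None:
        counts = Counter()
        with open("kubelet.log", encoding="utf-8", errors="replace") as f:
            for line in f:
                m = EVENT_RE.search(line)
                if m:
                    counts[m.group(1)] += 1
        for event, n in counts.most_common():
            print(f"{event}: {n}")

    if __name__ == "__main__":
        main()
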
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.752867 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.854989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.855001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.855010 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
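If the API server is reachable, the same Ready condition the kubelet keeps recording here can be read back directly. A sketch using the official kubernetes Python client (pip install kubernetes); a working kubeconfig and the node name crc from the log are assumed:

    # node_ready.py - read back the node's Ready condition via the API server.
    # Assumes `pip install kubernetes` and a kubeconfig that can reach the cluster.
    from kubernetes import client, config

    def main() -> None:
        config.load_kube_config()      # or config.load_incluster_config()
        v1 = client.CoreV1Api()
        node = v1.read_node(name="crc")  # node name taken from the log
        for cond in node.status.conditions:
            if cond.type == "Ready":
                print(f"Ready={cond.status} reason={cond.reason}: {cond.message}")

    if __name__ == "__main__":
        main()
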
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.956124 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:27:15.261660972 +0000 UTC
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.957973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.958113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:26Z","lastTransitionTime":"2026-01-30T21:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:26 crc kubenswrapper[4751]: I0130 21:15:26.974986 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:26 crc kubenswrapper[4751]: E0130 21:15:26.975147 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.060469 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.163824 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.265912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.368625 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.471110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.573576 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.675815 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.778586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.881884 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.956628 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:49:38.851672945 +0000 UTC
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976311 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976462 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976561 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
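The two certificate_manager entries above show the kubelet-serving certificate is still valid until 2026-02-24, but the rotation deadlines it prints (2025-12-30 and 2025-12-08, a freshly jittered value on each pass) both lie before the log's current time of 2026-01-30, so rotation is already overdue. A small stdlib check of exactly that arithmetic, using only timestamps copied from the entries above:

    # rotation_overdue.py - compare the rotation deadlines printed by the
    # certificate manager against the log's current time.
    from datetime import datetime, timezone

    LOG_NOW = datetime(2026, 1, 30, 21, 15, 27, tzinfo=timezone.utc)  # from the entries above
    EXPIRY = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)     # "Certificate expiration is ..."
    DEADLINES = [                                                     # one per certificate_manager entry
        datetime(2025, 12, 30, 6, 27, 15, tzinfo=timezone.utc),
        datetime(2025, 12, 8, 10, 49, 38, tzinfo=timezone.utc),
    ]

    for d in DEADLINES:
        state = "overdue" if d < LOG_NOW else "pending"
        print(f"rotation deadline {d:%Y-%m-%d %H:%M:%S} is {state}; "
              f"{(EXPIRY - LOG_NOW).days} days of cert validity left")
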
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.976619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976731 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:27 crc kubenswrapper[4751]: E0130 21:15:27.976937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:27 crc kubenswrapper[4751]: I0130 21:15:27.984097 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:27Z","lastTransitionTime":"2026-01-30T21:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.086547 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.189557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.293365 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
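Besides the Ready condition, the failing node-status patches recorded below also carry the node's capacity and allocatable figures (cpu 12 vs 11800m, memory 32865360Ki vs 32404560Ki, ephemeral-storage 83293888Ki vs 76396645454 bytes), i.e. what the kubelet holds back for system reservations and eviction thresholds. A quick computation of those reserved amounts from the logged values:

    # reserved_resources.py - difference between the capacity and allocatable
    # values carried in the failing node-status patches below.
    KI = 1024

    capacity = {"cpu_m": 12 * 1000, "memory_ki": 32865360, "ephemeral_b": 83293888 * KI}
    allocatable = {"cpu_m": 11800, "memory_ki": 32404560, "ephemeral_b": 76396645454}

    print(f"cpu reserved:       {capacity['cpu_m'] - allocatable['cpu_m']} m")
    print(f"memory reserved:    {(capacity['memory_ki'] - allocatable['memory_ki']) // KI} Mi")
    print(f"ephemeral reserved: {(capacity['ephemeral_b'] - allocatable['ephemeral_b']) / KI**3:.1f} Gi")
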
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.396753 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.499957 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.602906 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
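The "Error updating node status, will retry" entries recorded below all fail the same way: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, months before the log's current time of 2026-01-30, so the API server rejects every status patch and the node stays NotReady. A sketch that fetches that serving certificate and prints its validity window; it assumes the third-party cryptography package (pip install cryptography, version 42+ for the *_utc accessors), with the host and port taken from the error text:

    # webhook_cert_expiry.py - fetch the webhook's serving certificate and print
    # its validity window. Host/port come from the "failed calling webhook"
    # errors below; assumes `pip install cryptography`.
    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743

    def main() -> None:
        # Fetch the cert without verifying it - verification is exactly what fails.
        pem = ssl.get_server_certificate((HOST, PORT))
        cert = x509.load_pem_x509_certificate(pem.encode())
        now = datetime.now(timezone.utc)
        print(f"notBefore={cert.not_valid_before_utc}  notAfter={cert.not_valid_after_utc}")
        print("EXPIRED" if cert.not_valid_after_utc < now else "valid")

    if __name__ == "__main__":
        main()
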
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.705726 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.808501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.809801 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.825860 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.829902 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.847722 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.851838 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.870385 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.873937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.890918 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.895089 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.908296 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:28Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.908556 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.910522 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:28Z","lastTransitionTime":"2026-01-30T21:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.957116 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:56:48.089561854 +0000 UTC Jan 30 21:15:28 crc kubenswrapper[4751]: I0130 21:15:28.975523 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:28 crc kubenswrapper[4751]: E0130 21:15:28.975708 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.013310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.115733 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.218525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.321572 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.424415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.526936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629078 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.629115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.731972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.732047 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.834289 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.936997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.937021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.937038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:29Z","lastTransitionTime":"2026-01-30T21:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.957422 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:57:42.66713939 +0000 UTC Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.974950 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.974993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:29 crc kubenswrapper[4751]: I0130 21:15:29.975013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975045 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975109 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:29 crc kubenswrapper[4751]: E0130 21:15:29.975224 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.039831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.142472 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.244732 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.346986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.448676 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.551592 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.654875 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.757999 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.859987 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.957570 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:09:19.543757078 +0000 UTC Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.962507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:30Z","lastTransitionTime":"2026-01-30T21:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:30 crc kubenswrapper[4751]: I0130 21:15:30.975594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:30 crc kubenswrapper[4751]: E0130 21:15:30.975770 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.065804 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
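The certificate_manager lines (21:15:29, 21:15:30, 21:15:31) show a kubelet-serving certificate valid until 2026-02-24 05:53:03 whose computed rotation deadline keeps landing in the past (2026-01-14, 2025-12-03, 2025-11-22), so at the current time of 2026-01-30 rotation is already overdue and the manager re-rolls the deadline on each attempt. Below is a sketch of a jittered-deadline scheme, assuming client-go's usual rule of rotating at roughly 70-90% of the certificate lifetime; the issuance time is invented for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a point at 70-90% of the certificate's
// lifetime, mirroring (as an assumption) client-go's jittered rotation.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * jitter))
}

func main() {
	// Expiry comes from the log; the one-year lifetime is hypothetical.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	now, _ := time.Parse(time.RFC3339, "2026-01-30T21:15:30Z")

	deadline := nextRotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline %s, overdue=%v\n",
		deadline.Format(time.RFC3339), deadline.Before(now))
}
```

With the hypothetical one-year lifetime, 70-90% falls between early November 2025 and late January 2026, the same window as the three deadlines logged above.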
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.168612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.271273 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.374135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.476608 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.579831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.682733 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.784946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.887444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.958497 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:54:01.221782074 +0000 UTC Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.974954 4751 util.go:30] "No sandbox for pod can be found. 
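util.go keeps reporting missing sandboxes for the same pods, and pod_workers then skips each of them with "network is not ready", while host-network workloads such as multus-5sgk2 and node-resolver-xdclq show Running in the status patches further down. A sketch of that gating follows, under the assumption (consistent with which pods do run in this log) that only pods needing the pod network are skipped while NetworkReady is false.

```go
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	Name        string
	HostNetwork bool
}

// syncPod models the gate seen in the pod_workers records: pods that
// need the pod network are skipped while the runtime network is not
// ready; host-network pods proceed (an assumption, see lead-in).
func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.HostNetwork {
		return errors.New("network is not ready: container runtime network not ready: NetworkReady=false")
	}
	return nil // real sync work would follow here
}

func main() {
	pods := []pod{
		{Name: "openshift-multus/network-metrics-daemon-c477w", HostNetwork: false},
		{Name: "openshift-dns/node-resolver-xdclq", HostNetwork: true},
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod %q, skipping: %v\n", p.Name, err)
		} else {
			fmt.Printf("synced %q\n", p.Name)
		}
	}
}
```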
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975066 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975119 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:31 crc kubenswrapper[4751]: E0130 21:15:31.975173 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.990530 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:31Z","lastTransitionTime":"2026-01-30T21:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
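Every status patch from here to the end of the section fails identically: the apiserver's call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, months before the current time of 2026-01-30; the same expired certificate later crashes ovnkube-controller when it tries to set node annotations. A stdlib sketch of the validity-window check behind the "certificate has expired or is not yet valid" message; only NotAfter comes from the log, NotBefore is invented.

```go
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

func main() {
	cert := &x509.Certificate{
		NotBefore: mustParse("2025-05-24T17:21:41Z"), // hypothetical issuance
		NotAfter:  mustParse("2025-08-24T17:21:41Z"), // expiry from the log
	}
	now := mustParse("2026-01-30T21:15:32Z") // verification time from the log

	// crypto/x509 rejects any chain whose leaf is outside its validity
	// window at verification time; this reproduces just that comparison.
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		panic(err)
	}
	return t
}
```

Because the webhook intercepts both pod status patches and node annotation updates, one expired certificate explains both the status_manager failures below and the ovnkube-controller CrashLoopBackOff near the end of the section.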
Has your network provider started?"} Jan 30 21:15:31 crc kubenswrapper[4751]: I0130 21:15:31.991974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:31Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.009156 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.026803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.043685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.061574 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.076067 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.089153 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.092214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.118430 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.129365 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.139483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.151994 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.165554 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.178675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.190244 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.193949 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.207991 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f878594
95ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.221124 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.235787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:32Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.296461 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.398125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.500707 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.602962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.603079 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.705999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.706139 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.809202 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.911640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:32Z","lastTransitionTime":"2026-01-30T21:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.959139 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:08:07.500903077 +0000 UTC Jan 30 21:15:32 crc kubenswrapper[4751]: I0130 21:15:32.975536 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:32 crc kubenswrapper[4751]: E0130 21:15:32.975936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.013822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:33Z","lastTransitionTime":"2026-01-30T21:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.959398 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:14:16.289249103 +0000 UTC
Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975615 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975637 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975718 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:33 crc kubenswrapper[4751]: I0130 21:15:33.975779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:33 crc kubenswrapper[4751]: E0130 21:15:33.975971 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.960438 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:39:19.892805281 +0000 UTC
Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.969398 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.969690 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.969895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:06.969867786 +0000 UTC m=+105.715690475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.975065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.975163 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:34 crc kubenswrapper[4751]: I0130 21:15:34.976722 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"
Jan 30 21:15:34 crc kubenswrapper[4751]: E0130 21:15:34.977094 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"
Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.961053 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:16:47.156281457 +0000 UTC
Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975475 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975534 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:35 crc kubenswrapper[4751]: I0130 21:15:35.975490 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975597 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:35 crc kubenswrapper[4751]: E0130 21:15:35.975797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443281 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log"
Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443362 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c" exitCode=1
Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"}
Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.443919 4751 scope.go:117] "RemoveContainer" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"
Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.465910 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.487479 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.506743 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514132 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.514194 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.521672 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.538421 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.554916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.571047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.590154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.601659 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617107 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.617168 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.620590 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.635315 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f
4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.647617 4751 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.660204 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.670645 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.681997 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.698264 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.713597 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:36Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.719288 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.822719 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.927430 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:36Z","lastTransitionTime":"2026-01-30T21:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.961953 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:26:34.178614112 +0000 UTC Jan 30 21:15:36 crc kubenswrapper[4751]: I0130 21:15:36.975386 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:36 crc kubenswrapper[4751]: E0130 21:15:36.975566 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.030139 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.132967 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.133002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.133037 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.235807 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.339256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.441954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.442049 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.448271 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.448397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.464500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.493082 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.512532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.535725 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc 
kubenswrapper[4751]: I0130 21:15:37.544431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.544475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.551492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.569243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.585263 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.601054 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.615888 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.637112 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f8
32aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.647310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.651441 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.662550 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.673789 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.690566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.702830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.718133 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.732968 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:37Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.750840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.853567 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.956730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:37Z","lastTransitionTime":"2026-01-30T21:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.962524 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:05:32.166042055 +0000 UTC
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975774 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975827 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:37 crc kubenswrapper[4751]: I0130 21:15:37.975781 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.975985 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.976113 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:37 crc kubenswrapper[4751]: E0130 21:15:37.976306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.059679 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.162807 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.265177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.367989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.368023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.368047 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.470291 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.573844 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.677198 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.779862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.882355 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.963232 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:31:46.527719174 +0000 UTC Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.975692 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:38 crc kubenswrapper[4751]: E0130 21:15:38.975933 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:38 crc kubenswrapper[4751]: I0130 21:15:38.985188 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:38Z","lastTransitionTime":"2026-01-30T21:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.088785 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.192174 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224746 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.224800 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.245729 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.250951 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.277007 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.282691 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.303355 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.308136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.326375 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.331759 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.353990 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:39Z is after 
2025-08-24T17:21:41Z" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.354262 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.356582 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.459950 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.562822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.665575 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.768998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.769140 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.872742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.873618 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.963676 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:33:35.228411435 +0000 UTC Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975075 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.975295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.975751 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.975902 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:39 crc kubenswrapper[4751]: E0130 21:15:39.976059 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:39 crc kubenswrapper[4751]: I0130 21:15:39.977612 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:39Z","lastTransitionTime":"2026-01-30T21:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079928 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.079986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.183374 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.285594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.388489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.491937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.594915 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697940 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.697961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.801415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.904831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:40Z","lastTransitionTime":"2026-01-30T21:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.964478 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:30:51.798560439 +0000 UTC Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.974884 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:40 crc kubenswrapper[4751]: E0130 21:15:40.975383 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:40 crc kubenswrapper[4751]: I0130 21:15:40.996774 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.007781 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.110974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.111006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.111031 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.213631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.316816 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.419836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.523712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.625937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.625999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.626044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.729497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.831911 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.935137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:41Z","lastTransitionTime":"2026-01-30T21:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.965132 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:15:57.752718565 +0000 UTC Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975674 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975773 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.975850 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975928 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:41 crc kubenswrapper[4751]: E0130 21:15:41.975993 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:41 crc kubenswrapper[4751]: I0130 21:15:41.990750 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:41Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.006964 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.032855 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.038641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.052404 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.071921 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.102158 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.123095 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.140373 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.143936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.144095 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.159553 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.179608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.198752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.220043 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.236054 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.248593 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.254269 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.292170 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f993e56-5c22-4c90-970f-15faa6ea54b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7bafdd301335a08edb5982410cee5965742f6
b772c88c52ae3630214a4b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.319276 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cce
d83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.340505 4751 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.351996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.352110 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.367608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:42Z is after 2025-08-24T17:21:41Z"
Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454385 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.454499 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.558985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.559115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.662394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.765460 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.868276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.869390 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.965917 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:12:53.187991201 +0000 UTC Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.974211 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:42Z","lastTransitionTime":"2026-01-30T21:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:42 crc kubenswrapper[4751]: I0130 21:15:42.975194 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:42 crc kubenswrapper[4751]: E0130 21:15:42.975439 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.077980 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.180276 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.283141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386121 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.386181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.489299 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.592667 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.695953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.696064 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798251 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.798268 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.901294 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:43Z","lastTransitionTime":"2026-01-30T21:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.967032 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:29:01.757414028 +0000 UTC Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974823 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:43 crc kubenswrapper[4751]: I0130 21:15:43.974871 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.974979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.975168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:43 crc kubenswrapper[4751]: E0130 21:15:43.975280 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.004641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.107964 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.210233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.312835 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.415714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519130 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.519177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.622661 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.726457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.829439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.932696 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:44Z","lastTransitionTime":"2026-01-30T21:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.968062 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:00:36.149907326 +0000 UTC Jan 30 21:15:44 crc kubenswrapper[4751]: I0130 21:15:44.975451 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:44 crc kubenswrapper[4751]: E0130 21:15:44.975674 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.035965 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.138693 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.242660 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.345511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.448857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.553967 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.658695 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.761265 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790038 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790155807 +0000 UTC m=+148.535978496 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790312 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.790416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790507 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790535 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790558 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790543617 +0000 UTC m=+148.536366306 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790653 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.79063142 +0000 UTC m=+148.536454109 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790778 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790840 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790865 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.790985 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.790948648 +0000 UTC m=+148.536771377 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.864586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.891600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891834 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891877 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891896 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.891983 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 21:16:49.891958977 +0000 UTC m=+148.637781666 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.967539 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:45Z","lastTransitionTime":"2026-01-30T21:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992588 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:21:56.706503138 +0000 UTC Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.992969 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993036 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993105 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:45 crc kubenswrapper[4751]: I0130 21:15:45.993320 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:45 crc kubenswrapper[4751]: E0130 21:15:45.993498 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.071618 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.174445 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.277527 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.380924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.484135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.586894 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.689912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.792857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.897713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:46Z","lastTransitionTime":"2026-01-30T21:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.975548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:46 crc kubenswrapper[4751]: E0130 21:15:46.975948 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.991372 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 21:15:46 crc kubenswrapper[4751]: I0130 21:15:46.993460 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:29:01.204800912 +0000 UTC Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.000559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.102795 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.206461 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.309849 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.413280 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.515937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.516062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.620900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.724458 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.826932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.826997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.827058 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.930313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:47Z","lastTransitionTime":"2026-01-30T21:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.974907 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.975217 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975437 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.975474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:47 crc kubenswrapper[4751]: E0130 21:15:47.975601 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:47 crc kubenswrapper[4751]: I0130 21:15:47.994015 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:03:52.667128567 +0000 UTC Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.033640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.138899 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.242976 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.346449 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.449705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.553536 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.656939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.657076 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.760501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.863627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.967317 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:48Z","lastTransitionTime":"2026-01-30T21:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.975293 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:48 crc kubenswrapper[4751]: E0130 21:15:48.975519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:48 crc kubenswrapper[4751]: I0130 21:15:48.994431 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:01:01.515163173 +0000 UTC Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.070702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.174489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.277629 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.381382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.484313 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.587458 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622843 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.622862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.645595 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.651362 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.671170 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.676140 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.696939 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.701806 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.723704 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.728761 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.748833 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1d4cc1fb-6004-4472-b0d5-24eeaaa5b5ad\\\",\\\"systemUUID\\\":\\\"e3f105e9-4c39-4cf5-b596-18e2f1b5ccbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:49Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.749045 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.751305 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.854944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.855068 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.958460 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:49Z","lastTransitionTime":"2026-01-30T21:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.974971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.975139 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.975191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975811 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:49 crc kubenswrapper[4751]: E0130 21:15:49.975929 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.976308 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab"
Jan 30 21:15:49 crc kubenswrapper[4751]: I0130 21:15:49.995655 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:55:44.983669792 +0000 UTC
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.061751 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.165912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.268614 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.371858 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475074 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.475104 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.507523 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.509426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"}
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.510463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd"
Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.522516 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f2ae240b09f1cc2add411641298059945f7ad59c7283a209d3720f7d2103d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.535811 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee35b719-afe2-45cf-8726-00c19502f02f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ddb0ae1c9743528a883149f5b88bf7f347418c753bf6630b93b7ff109f08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8dbe95fc625d90ed4660e4b05723a3c9d8ff91ed8fbc53764a662f8e717104a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa1bb0c111f94428408a5a88c259e837f47dcaf9f0339b00247b0075fb03b96d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://876ec931b6253e60015ef6f9ce9023e50873a20fef0edf7b269db590d97faf4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a49f1c60624955f3cac8e86ff89f87859495ffc4b39806d1c51fe35acad01f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73bd842989c72c114b1cbcbaea79d6b8182b4654ccde9f36fc0c5e802124d2b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483fa5ee4b62cec1b52ebf2cd68881f7a23ce16a0b87c8ecccfe2e6ff11098ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g67rv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xxc7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.545786 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c477w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c30a687-0b58-4a63-b9e3-3a3624676358\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zds78\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c477w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.566403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f993e56-5c22-4c90-970f-15faa6ea54b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f36ab607a38bfd32d8bfe64da36280f9b5efaad895c6c26880a00b9dd38ce5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7187f01f2a4bdab72ec724f553bfce1e954fd9793874021f9c28152b7d33914c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdeab12e361345755bc4e07dae7c7355ad83d93a67d27e35596c4b817e2e7699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7bafdd301335a08edb5982410cee5965742f6
b772c88c52ae3630214a4b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ae9a047d02cc4dcd6a27a4561a660059971561db33c72fdaaa10e177e091c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591aa13b2c2298e81c38fc6e0ddbf8f0c5025d86b7c40ec3c5ee4749ce6804a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b43a9d38e68aba1f763848cac4817d99a5f5f11f10a3f3da7ae1ec8845e90b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915d07a3289fc8f3a7221446ffa0562703611899bec4819f77af631ecbeb26c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.577361 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.581202 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89512b2a-f57e-4242-9c12-8b8c660dc530\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 21:14:35.773266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 21:14:35.775281 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-73361594/tls.crt::/tmp/serving-cert-73361594/tls.key\\\\\\\"\\\\nI0130 21:14:41.502522 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 21:14:41.506707 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 21:14:41.506748 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 21:14:41.506783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 21:14:41.506793 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 21:14:41.515702 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 21:14:41.515765 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515779 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 21:14:41.515791 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 21:14:41.515799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 21:14:41.515807 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 21:14:41.515814 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 21:14:41.515924 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 21:14:41.517542 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.593725 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.607316 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5sgk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcecdc4b-6607-4e4e-a9b5-49b85c030d21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:35Z\\\",\\\"message\\\":\\\"2026-01-30T21:14:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c\\\\n2026-01-30T21:14:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5dc3d9cf-ea3c-4a0d-90cc-2d599ddcdb3c to /host/opt/cni/bin/\\\\n2026-01-30T21:14:50Z [verbose] multus-daemon started\\\\n2026-01-30T21:14:50Z [verbose] Readiness Indicator file check\\\\n2026-01-30T21:15:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qx87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5sgk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.617695 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8627de81-5598-4e77-b895-c17fe64fde13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe6b37689f97464405ccee9a22eff435e66be2c6103b5187255056bf0febaec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://657bffa589cf69814f91728996ae779354f7ad9f62606bbba6fcc4107a06cfb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.631919 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.649567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"876db467-5de4-469d-926f-72bd7360ff97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://751b154b2de8ba6d171eac7b82c77498ed54b38d4c6759e35dacf49c57e3f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27dab33e7af8b89e8b6a3f6d3beff399121ca17e50406b83ec8a553598834ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037db6ab27fdac0f9290b2d34f883cc22ac3c79f2b52a16e6579df97474da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ffacefbd313f603b7a1c12199867c460857e78f04e16955e96d20a495a2b0ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.665138 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e918b2544a84293e9be54ec36dc115ff3564e606d2953c4110aff41fbd9daa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6210ea6f840dcec9dc1c58458d2d91b90bdb0ad4b50c22a85dc4c8bbfef4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.680253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.689289 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39b5cd533a7273944582a2bc1479b8aa3aa3cdb8b8f1dff93796c6e7940d6152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.707677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xdclq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e4f7eaf-acd6-4cf5-874c-d88c4e479113\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63653832e8a917a1814a5babc5da64d8e4ad73452853bbdb3f8bf252af2f8eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6qcg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xdclq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.727168 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T21:15:18Z\\\",\\\"message\\\":\\\"ss-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0130 21:15:18.987753 6489 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0130 21:15:18.987765 6489 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF0130 21:15:18.987796 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:18Z is after 2025-08-24T17:21:41Z]\\\\nI0130 21:15:18.987648 6489 obj_retry.go:303] Retry object setup: 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T21:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8497\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8bjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.738104 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9acdd0f1-560b-4246-b045-c598c5bbb93d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31324435aaf6459db3a2e3b8cc5600f74e016017ad4f4d2030a94ebaec5e3203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tfrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vgfkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.748545 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-h48zj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35ca91c5-bd9e-486b-943d-8123e2f6e84c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cc9a0aae783cc7232475cc962a2a07ef0e9a6d68bfc35ff8517c62baa81425b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wx9j8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h48zj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.758956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ffb2e7-2c69-43bb-84cd-821c1dffd7d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fd6f086c5baa031014c2877f088a3181d544f8061216a298cdaa18d3b93ae3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0785563e96c2a61d6f1a465fb1a6cef031e778fdb236a7e42bcda6377b043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccttm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:15:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8h2z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 
21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.772389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40bed914-8f0a-4610-a000-bc21cfc0e991\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45acae740227cb4cde7e145d4a4af5378c1023ec232ed6539ff9a38eaa3e1c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9cc6a0538abdc1df676991fe1e1c8b3517a34f93a1df18fa8dc225bb45377db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a41f9e619b34b0433da8d70490a2904d2552937dfff32ad535dffa3b738447c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T21:14:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T21:14:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.782772 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.785859 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T21:14:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T21:15:50Z is after 2025-08-24T17:21:41Z" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.886501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.975052 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:50 crc kubenswrapper[4751]: E0130 21:15:50.975314 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.989505 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:50Z","lastTransitionTime":"2026-01-30T21:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:50 crc kubenswrapper[4751]: I0130 21:15:50.996474 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:45:04.616220289 +0000 UTC Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.092818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.195853 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.299675 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.402473 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.505714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.516046 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.517092 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/2.log" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.520974 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" exitCode=1 Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.521026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.521099 4751 scope.go:117] "RemoveContainer" containerID="520d9e5f3264983c4a2f6ff3a3b47dfbcbb476f832aee33055230fdb25dcdfab" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.522015 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.522295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.597946 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.59791928 podStartE2EDuration="34.59791928s" podCreationTimestamp="2026-01-30 21:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.579416062 +0000 UTC 
m=+90.325238751" watchObservedRunningTime="2026-01-30 21:15:51.59791928 +0000 UTC m=+90.343741969" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.609436 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.639589 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xdclq" podStartSLOduration=63.639555211 podStartE2EDuration="1m3.639555211s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.637934236 +0000 UTC m=+90.383756905" watchObservedRunningTime="2026-01-30 21:15:51.639555211 +0000 UTC m=+90.385377900" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.699793 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podStartSLOduration=63.699766061 podStartE2EDuration="1m3.699766061s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.699515934 +0000 UTC m=+90.445338593" watchObservedRunningTime="2026-01-30 21:15:51.699766061 +0000 UTC m=+90.445588740" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.712860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.731123 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h48zj" podStartSLOduration=63.731099159 podStartE2EDuration="1m3.731099159s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.71577832 +0000 UTC m=+90.461600989" watchObservedRunningTime="2026-01-30 21:15:51.731099159 +0000 UTC m=+90.476921838" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.755526 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8h2z8" podStartSLOduration=62.755498478 podStartE2EDuration="1m2.755498478s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.730865443 +0000 UTC m=+90.476688122" watchObservedRunningTime="2026-01-30 21:15:51.755498478 +0000 UTC m=+90.501321167" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.756593 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.756579368 podStartE2EDuration="1m8.756579368s" podCreationTimestamp="2026-01-30 21:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.755289513 +0000 UTC m=+90.501112172" watchObservedRunningTime="2026-01-30 21:15:51.756579368 +0000 UTC m=+90.502402067" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.798860 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.798826205 podStartE2EDuration="1m9.798826205s" podCreationTimestamp="2026-01-30 21:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.775939758 +0000 UTC m=+90.521762447" watchObservedRunningTime="2026-01-30 21:15:51.798826205 +0000 UTC m=+90.544648894" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.815444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.835392 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xxc7s" podStartSLOduration=63.835372247 podStartE2EDuration="1m3.835372247s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.835349866 +0000 UTC m=+90.581172515" watchObservedRunningTime="2026-01-30 21:15:51.835372247 +0000 UTC m=+90.581194916" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.879000 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.878971882 podStartE2EDuration="11.878971882s" podCreationTimestamp="2026-01-30 21:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.878202302 +0000 UTC m=+90.624024961" watchObservedRunningTime="2026-01-30 21:15:51.878971882 +0000 UTC m=+90.624794561" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.917529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:51Z","lastTransitionTime":"2026-01-30T21:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.929799 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5sgk2" podStartSLOduration=63.929775415 podStartE2EDuration="1m3.929775415s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.928414418 +0000 UTC m=+90.674237077" watchObservedRunningTime="2026-01-30 21:15:51.929775415 +0000 UTC m=+90.675598064" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.943756 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.943734787 podStartE2EDuration="5.943734787s" podCreationTimestamp="2026-01-30 21:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:15:51.943421649 +0000 UTC m=+90.689244328" watchObservedRunningTime="2026-01-30 21:15:51.943734787 +0000 UTC m=+90.689557446" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.974964 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.974993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.975064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976246 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976497 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:51 crc kubenswrapper[4751]: E0130 21:15:51.976412 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:51 crc kubenswrapper[4751]: I0130 21:15:51.996826 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:09:07.556820723 +0000 UTC Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.020833 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.123923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124353 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.124383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.227489 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:52Z","lastTransitionTime":"2026-01-30T21:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the four "Recording event message for node" events and the setters.go:603 "Node became not ready" condition above repeat with identical content (timestamps aside) roughly every 100 ms through 21:15:57.713; the distinct entries logged in that window follow ...]
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.526698 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.530535 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"
Jan 30 21:15:52 crc kubenswrapper[4751]: E0130 21:15:52.530807 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.974828 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:52 crc kubenswrapper[4751]: E0130 21:15:52.974986 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:52 crc kubenswrapper[4751]: I0130 21:15:52.997231 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:25:35.011761069 +0000 UTC
Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975104 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975221 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.975307 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.975775 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.975917 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:53 crc kubenswrapper[4751]: E0130 21:15:53.976052 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:53 crc kubenswrapper[4751]: I0130 21:15:53.998299 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:53:39.269953392 +0000 UTC
Jan 30 21:15:54 crc kubenswrapper[4751]: I0130 21:15:54.975650 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:54 crc kubenswrapper[4751]: E0130 21:15:54.975832 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.000239 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:39:36.922935349 +0000 UTC
Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.975610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975628 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:15:55 crc kubenswrapper[4751]: I0130 21:15:55.975667 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.975761 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 21:15:55 crc kubenswrapper[4751]: E0130 21:15:55.976032 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.000879 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:39:57.563036608 +0000 UTC
Jan 30 21:15:56 crc kubenswrapper[4751]: I0130 21:15:56.975600 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:15:56 crc kubenswrapper[4751]: E0130 21:15:56.975752 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.001875 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:30:30.305306454 +0000 UTC
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.713297 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.817367 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.920822 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:57Z","lastTransitionTime":"2026-01-30T21:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976043 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976214 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:57 crc kubenswrapper[4751]: I0130 21:15:57.976063 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976463 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:57 crc kubenswrapper[4751]: E0130 21:15:57.976704 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.002623 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:51:37.662484531 +0000 UTC Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.022904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.022979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.023057 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.125922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.228712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.332122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.434987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.435005 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.537548 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.641197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.744603 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.847945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.848072 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.950932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.951077 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:58Z","lastTransitionTime":"2026-01-30T21:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:58 crc kubenswrapper[4751]: I0130 21:15:58.974724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:15:58 crc kubenswrapper[4751]: E0130 21:15:58.974912 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.002795 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:25:53.782304828 +0000 UTC Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.053678 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.157315 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.267620 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.371363 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.474524 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.577936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.681391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.784923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.785083 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871166 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.871310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T21:15:59Z","lastTransitionTime":"2026-01-30T21:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.934777 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss"] Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.935721 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.938661 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.938827 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.939035 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.939172 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942684 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942743 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.942861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.975901 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.976018 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:15:59 crc kubenswrapper[4751]: I0130 21:15:59.975652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:15:59 crc kubenswrapper[4751]: E0130 21:15:59.976103 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.003384 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:16:08.760725411 +0000 UTC Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.003487 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.015002 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044657 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044865 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.044898 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/03cfe98a-5efe-4a69-856e-1bcf960c268a-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.046083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/03cfe98a-5efe-4a69-856e-1bcf960c268a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.066266 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03cfe98a-5efe-4a69-856e-1bcf960c268a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.084532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03cfe98a-5efe-4a69-856e-1bcf960c268a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gsgss\" (UID: \"03cfe98a-5efe-4a69-856e-1bcf960c268a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.259492 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.563543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" event={"ID":"03cfe98a-5efe-4a69-856e-1bcf960c268a","Type":"ContainerStarted","Data":"73f5adb8104d10ae116d6d8493bce49ce348c1420aff1a924f15e589046e84b9"} Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.564037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" event={"ID":"03cfe98a-5efe-4a69-856e-1bcf960c268a","Type":"ContainerStarted","Data":"4218678d613e9e169f18ff75017b1e959a1d8bbfd899f8fd6908a20155d85e25"} Jan 30 21:16:00 crc kubenswrapper[4751]: I0130 21:16:00.974994 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:00 crc kubenswrapper[4751]: E0130 21:16:00.975167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.974798 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.974848 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978112 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:01 crc kubenswrapper[4751]: I0130 21:16:01.978172 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978360 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:01 crc kubenswrapper[4751]: E0130 21:16:01.978612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:02 crc kubenswrapper[4751]: I0130 21:16:02.975647 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:02 crc kubenswrapper[4751]: E0130 21:16:02.975854 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975231 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:03 crc kubenswrapper[4751]: I0130 21:16:03.975281 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975625 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975743 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:03 crc kubenswrapper[4751]: E0130 21:16:03.975905 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:04 crc kubenswrapper[4751]: I0130 21:16:04.976044 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:04 crc kubenswrapper[4751]: E0130 21:16:04.976758 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:04 crc kubenswrapper[4751]: I0130 21:16:04.976995 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:04 crc kubenswrapper[4751]: E0130 21:16:04.977227 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975685 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:05 crc kubenswrapper[4751]: I0130 21:16:05.975759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977213 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977269 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:05 crc kubenswrapper[4751]: E0130 21:16:05.977320 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:06 crc kubenswrapper[4751]: I0130 21:16:06.974971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:06 crc kubenswrapper[4751]: E0130 21:16:06.975472 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.027728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.027967 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.028042 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs podName:3c30a687-0b58-4a63-b9e3-3a3624676358 nodeName:}" failed. No retries permitted until 2026-01-30 21:17:11.028017348 +0000 UTC m=+169.773840027 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs") pod "network-metrics-daemon-c477w" (UID: "3c30a687-0b58-4a63-b9e3-3a3624676358") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.974876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.974961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:07 crc kubenswrapper[4751]: I0130 21:16:07.975092 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975098 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975297 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:07 crc kubenswrapper[4751]: E0130 21:16:07.975611 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:08 crc kubenswrapper[4751]: I0130 21:16:08.975362 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:08 crc kubenswrapper[4751]: E0130 21:16:08.975654 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975214 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975265 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975433 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:09 crc kubenswrapper[4751]: I0130 21:16:09.975510 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975625 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:09 crc kubenswrapper[4751]: E0130 21:16:09.975803 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:10 crc kubenswrapper[4751]: I0130 21:16:10.983808 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:10 crc kubenswrapper[4751]: E0130 21:16:10.983999 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.975502 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.975661 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.976611 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:11 crc kubenswrapper[4751]: I0130 21:16:11.976702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.977245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:11 crc kubenswrapper[4751]: E0130 21:16:11.977436 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:12 crc kubenswrapper[4751]: I0130 21:16:12.975700 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:12 crc kubenswrapper[4751]: E0130 21:16:12.975937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975258 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975273 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:13 crc kubenswrapper[4751]: I0130 21:16:13.975392 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975570 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975695 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:13 crc kubenswrapper[4751]: E0130 21:16:13.975778 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:14 crc kubenswrapper[4751]: I0130 21:16:14.975545 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:14 crc kubenswrapper[4751]: E0130 21:16:14.975712 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975284 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:15 crc kubenswrapper[4751]: I0130 21:16:15.975853 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.975830 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.976047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:15 crc kubenswrapper[4751]: E0130 21:16:15.976245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:16 crc kubenswrapper[4751]: I0130 21:16:16.975314 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:16 crc kubenswrapper[4751]: E0130 21:16:16.975516 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:16 crc kubenswrapper[4751]: I0130 21:16:16.976379 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:16 crc kubenswrapper[4751]: E0130 21:16:16.976620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8bjd_openshift-ovn-kubernetes(3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976078 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976096 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:17 crc kubenswrapper[4751]: I0130 21:16:17.976138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976303 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976542 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:17 crc kubenswrapper[4751]: E0130 21:16:17.976717 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:18 crc kubenswrapper[4751]: I0130 21:16:18.975046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:18 crc kubenswrapper[4751]: E0130 21:16:18.975221 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.975641 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.975723 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.975870 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:19 crc kubenswrapper[4751]: I0130 21:16:19.976198 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.976312 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:19 crc kubenswrapper[4751]: E0130 21:16:19.976709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:20 crc kubenswrapper[4751]: I0130 21:16:20.975602 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:20 crc kubenswrapper[4751]: E0130 21:16:20.975822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.975580 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.975621 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.977663 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:21 crc kubenswrapper[4751]: I0130 21:16:21.977939 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.977950 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:21 crc kubenswrapper[4751]: E0130 21:16:21.978560 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.011705 4751 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.100511 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.647652 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log"
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648565 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/0.log"
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648640 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" exitCode=1
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648682 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"}
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.648727 4751 scope.go:117] "RemoveContainer" containerID="a93dc5c7348691e2c7629ae7c688abc2eb17c980a67bc936a16f19deed99928c"
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.649366 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"
Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.649619 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21"
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.678126 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gsgss" podStartSLOduration=94.678071844 podStartE2EDuration="1m34.678071844s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:00.584776197 +0000 UTC m=+99.330598936" watchObservedRunningTime="2026-01-30 21:16:22.678071844 +0000 UTC m=+121.423894533"
Jan 30 21:16:22 crc kubenswrapper[4751]: I0130 21:16:22.975168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w"
Jan 30 21:16:22 crc kubenswrapper[4751]: E0130 21:16:22.975395 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358"
Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.655283 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log"
Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975318 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:23 crc kubenswrapper[4751]: I0130 21:16:23.975223 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975826 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:23 crc kubenswrapper[4751]: E0130 21:16:23.975553 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:24 crc kubenswrapper[4751]: I0130 21:16:24.975378 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:24 crc kubenswrapper[4751]: E0130 21:16:24.976387 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.975692 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.975850 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.975698 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.975968 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:25 crc kubenswrapper[4751]: I0130 21:16:25.976046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:25 crc kubenswrapper[4751]: E0130 21:16:25.976123 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:26 crc kubenswrapper[4751]: I0130 21:16:26.975072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:26 crc kubenswrapper[4751]: E0130 21:16:26.975262 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.101874 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976158 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976309 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:27 crc kubenswrapper[4751]: I0130 21:16:27.976302 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976456 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976824 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:27 crc kubenswrapper[4751]: E0130 21:16:27.976980 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:28 crc kubenswrapper[4751]: I0130 21:16:28.975838 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:28 crc kubenswrapper[4751]: E0130 21:16:28.976043 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.974824 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.974821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:29 crc kubenswrapper[4751]: I0130 21:16:29.975160 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975143 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975413 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:29 crc kubenswrapper[4751]: E0130 21:16:29.975539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:30 crc kubenswrapper[4751]: I0130 21:16:30.975193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:30 crc kubenswrapper[4751]: E0130 21:16:30.975429 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.975157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.977129 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.977275 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.977356 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.977870 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:31 crc kubenswrapper[4751]: E0130 21:16:31.978047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:31 crc kubenswrapper[4751]: I0130 21:16:31.978493 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29" Jan 30 21:16:32 crc kubenswrapper[4751]: E0130 21:16:32.102734 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.691430 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.694789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerStarted","Data":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.695261 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.743036 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podStartSLOduration=104.74301404 podStartE2EDuration="1m44.74301404s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:32.742600829 +0000 UTC m=+131.488423538" watchObservedRunningTime="2026-01-30 21:16:32.74301404 +0000 UTC m=+131.488836729" Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.892807 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"] Jan 30 21:16:32 crc kubenswrapper[4751]: I0130 21:16:32.892943 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:32 crc kubenswrapper[4751]: E0130 21:16:32.893033 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978606 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978606 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:33 crc kubenswrapper[4751]: I0130 21:16:33.978760 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.979693 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.979855 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:33 crc kubenswrapper[4751]: E0130 21:16:33.980292 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:34 crc kubenswrapper[4751]: I0130 21:16:34.975112 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:34 crc kubenswrapper[4751]: E0130 21:16:34.975519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.975248 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.975309 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.975516 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.975666 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:35 crc kubenswrapper[4751]: I0130 21:16:35.976557 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:35 crc kubenswrapper[4751]: E0130 21:16:35.976843 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:36 crc kubenswrapper[4751]: I0130 21:16:36.974774 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:36 crc kubenswrapper[4751]: E0130 21:16:36.975011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.103462 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974881 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974911 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.974999 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975044 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975134 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:37 crc kubenswrapper[4751]: E0130 21:16:37.975439 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:37 crc kubenswrapper[4751]: I0130 21:16:37.975807 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02" Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.718766 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log" Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.718867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344"} Jan 30 21:16:38 crc kubenswrapper[4751]: I0130 21:16:38.974820 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:38 crc kubenswrapper[4751]: E0130 21:16:38.975086 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975647 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975689 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:39 crc kubenswrapper[4751]: I0130 21:16:39.975689 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.975835 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.976144 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:39 crc kubenswrapper[4751]: E0130 21:16:39.976408 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:40 crc kubenswrapper[4751]: I0130 21:16:40.975222 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:40 crc kubenswrapper[4751]: E0130 21:16:40.975531 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c477w" podUID="3c30a687-0b58-4a63-b9e3-3a3624676358" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.975276 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.975501 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.975769 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.977807 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 21:16:41 crc kubenswrapper[4751]: I0130 21:16:41.977852 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:41 crc kubenswrapper[4751]: E0130 21:16:41.977983 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.975696 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.978629 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 21:16:42 crc kubenswrapper[4751]: I0130 21:16:42.981492 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.975958 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.976198 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.976315 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.979255 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.979815 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.980081 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 21:16:43 crc kubenswrapper[4751]: I0130 21:16:43.980843 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.811951 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.812222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 
Jan 30 21:16:49 crc kubenswrapper[4751]: E0130 21:16:49.813765 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:18:51.81372493 +0000 UTC m=+270.559547619 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.814608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.821435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.821633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.913815 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:16:49 crc kubenswrapper[4751]: I0130 21:16:49.918521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.000648 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.020582 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 21:16:50 crc kubenswrapper[4751]: W0130 21:16:50.354880 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c WatchSource:0}: Error finding container 727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c: Status 404 returned error can't find the container with id 727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c Jan 30 21:16:50 crc kubenswrapper[4751]: W0130 21:16:50.355736 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288 WatchSource:0}: Error finding container 687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288: Status 404 returned error can't find the container with id 687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288 Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4846ea91681b12b397a16f38132275101045c23a0d3e5e9dba986e645f244544"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"687c61c41b15949ac0b4bcc29898828f27a92c0d9a00299b29e187cd71555288"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.773816 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.775799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c202d23928b1157b9956dbb2cccc152128915eb12dce8495093c51e98b2c8d24"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.775880 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"727a24abfc6bee03bd6244b50025642bd0da9aee41457573e4c40e2d5ba4578c"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.778398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ba19b1bad242d06c2107c879585907e0de0a87cee578c9543f6d2a667e133a1"} Jan 30 21:16:50 crc kubenswrapper[4751]: I0130 21:16:50.778455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"40cd9d502644359f6d3f370ebb3f673f5f463ca6bac4f1911d1c8e539a8f92c7"} Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.024018 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.091583 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.092358 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.100597 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.102919 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.103746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.104067 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.118213 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.119566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.119752 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120038 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120299 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120486 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120634 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.120811 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.126306 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.126992 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.127519 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.128126 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.128619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129095 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129597 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.129950 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130116 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130167 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130287 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130415 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130516 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130525 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130583 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130650 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130693 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130785 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.130653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.137402 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.137964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.139977 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.140549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.140635 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.142069 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.143718 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.144057 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145684 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145834 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.145984 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.146116 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.147084 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.147664 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.150987 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.151474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152400 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152540 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.152850 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.153473 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154133 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154412 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154689 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.154937 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155625 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155783 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.155927 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156250 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gckm"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156356 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.156683 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.158308 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.158869 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164395 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164657 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164684 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164702 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164801 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164849 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164861 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164667 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164948 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.164979 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.195739 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.200936 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201158 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201397 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201537 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201729 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201865 
4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.201993 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.202118 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.202234 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203121 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203271 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.203502 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.199310 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.206167 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.207893 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.208864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209024 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209127 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209220 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209450 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209580 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.208476 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.209898 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.210542 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.210919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.210994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211095 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211152 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211238 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211452 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211536 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211647 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211821 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.211931 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.212021 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220427 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220557 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220840 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220866 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.220879 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.221946 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.221957 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.222560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.222581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-w42cs"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223146 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223269 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.223539 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224089 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224265 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.224297 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.226259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.226757 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.227185 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.230767 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.233839 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.234298 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.234975 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235015 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235123 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235200 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235250 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235750 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235812 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235827 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235875 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235941 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235957 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235974 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.235990 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236007 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236029 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236280 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237070 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237218 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.237879 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238392 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.236036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238579 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238722 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238755 
4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xpm\" (UniqueName: \"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238837 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238919 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238948 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238964 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238979 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.238995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239096 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239247 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.239480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " 
pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240126 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.240144 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.241274 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.250359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.250851 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.253596 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.254071 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.266651 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.268504 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269515 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.269813 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.289658 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.289814 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.290397 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.291208 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.297908 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.300372 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.303291 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.304570 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.307227 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.307476 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.308285 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.308922 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.309567 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.310157 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.310884 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.311602 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.312181 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.312846 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.313424 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.314103 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.317705 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.318178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.318659 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.319120 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.319756 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.320396 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.320560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.321697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.323301 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.327549 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.333839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.335404 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.336925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.337791 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.339950 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340764 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340832 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340876 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340921 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.340980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341003 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: 
\"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341049 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341204 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341247 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341285 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341370 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xpm\" (UniqueName: 
\"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341417 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341506 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " 
pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341609 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341630 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341670 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.341867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.342301 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.342790 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343076 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-audit\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-image-import-ca\") pod 
\"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.343986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3639569-5d39-4fa1-863c-45307b3da476-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.344545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-config\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.344992 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346054 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.346782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-serving-cert\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.347917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/80b9c760-3c34-42cb-bb23-1f11dad50e58-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.348224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-node-pullsecrets\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.348907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod 
\"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.349728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-serving-ca\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.349950 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebb4c857-4f54-440f-81d7-74eadc588099-images\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d872f03-d4d0-49bc-9758-05060035dafa-audit-dir\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.350605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351291 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-trusted-ca\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-config\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.351875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.352184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9671c2-84f9-4719-b497-4fa77803105b-service-ca-bundle\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.352525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.354450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d872f03-d4d0-49bc-9758-05060035dafa-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.354773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.355687 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4ed27b8-56e7-4e93-aea6-83adae8affb6-serving-cert\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.355922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9c760-3c34-42cb-bb23-1f11dad50e58-serving-cert\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.356319 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.356640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-etcd-client\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: 
I0130 21:16:51.356958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.357049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-auth-proxy-config\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.358273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.358277 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.360534 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4ed27b8-56e7-4e93-aea6-83adae8affb6-config\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d872f03-d4d0-49bc-9758-05060035dafa-encryption-config\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.361962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-machine-approver-tls\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.366066 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.367149 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.368107 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"] Jan 30 21:16:51 crc 
kubenswrapper[4751]: I0130 21:16:51.369056 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.373990 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.375966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3639569-5d39-4fa1-863c-45307b3da476-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.377492 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c9671c2-84f9-4719-b497-4fa77803105b-serving-cert\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebb4c857-4f54-440f-81d7-74eadc588099-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.378995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.379572 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.380395 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.384361 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.384471 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.385480 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.387178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.388651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-6gckm"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.390226 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.391414 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.391662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.392815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.393864 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hrfwj"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.394386 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.394652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.397239 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.398491 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.409272 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.411148 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.413059 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.414787 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.415060 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.416770 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.418817 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.420892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfwj"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.423604 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2hvtm"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.424319 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.424831 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.426249 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.426280 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"] Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.434439 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.453457 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.474805 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.494565 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.514711 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.534622 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.554624 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.574026 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.597630 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.614690 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.634719 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.655251 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.674796 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.714574 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.735511 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.755584 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.774574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.795738 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.815036 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.836313 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.855028 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.886556 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.894648 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.915261 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.934897 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.954726 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.976198 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:16:51 crc kubenswrapper[4751]: I0130 21:16:51.995143 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.015364 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.034693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.061084 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.075019 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.095754 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.114579 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.136770 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.155137 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.175686 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.195315 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.215354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.234174 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.274897 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.293409 4751 request.go:700] Waited for 1.00109133s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.295362 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.314721 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.334718 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.354942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.375281 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.395682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.427554 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.435837 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.455766 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.495760 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.515604 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.535087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.555440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.576072 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.595264 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.614839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.635202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.655687 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.675279 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.695069 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.715456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.734839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.755723 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.775679 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.795532 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.815523 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.835149 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.855451 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.876709 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.896654 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.914725 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.935833 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.954730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 21:16:52 crc kubenswrapper[4751]: I0130 21:16:52.974346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.022752 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w78rj\" (UniqueName: \"kubernetes.io/projected/80b9c760-3c34-42cb-bb23-1f11dad50e58-kube-api-access-w78rj\") pod \"openshift-config-operator-7777fb866f-x98hg\" (UID: \"80b9c760-3c34-42cb-bb23-1f11dad50e58\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.044205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppkm\" (UniqueName: \"kubernetes.io/projected/6d872f03-d4d0-49bc-9758-05060035dafa-kube-api-access-gppkm\") pod \"apiserver-76f77b778f-plkp9\" (UID: \"6d872f03-d4d0-49bc-9758-05060035dafa\") " pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.068980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"route-controller-manager-6576b87f9c-8z9vp\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.086908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmck\" (UniqueName: \"kubernetes.io/projected/5c9671c2-84f9-4719-b497-4fa77803105b-kube-api-access-9pmck\") pod \"authentication-operator-69f744f599-9wvms\" (UID: \"5c9671c2-84f9-4719-b497-4fa77803105b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.099030 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
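The long run of reflector.go:368 "Caches populated" lines records client-go reflectors finishing their initial LIST for each ConfigMap and Secret that pods on this node reference; the kubelet watches each such object individually, which is why the throttled GETs above carry a metadata.name fieldSelector. A compact sketch of the same list-then-watch machinery using a client-go shared informer; the kubeconfig path and namespace are placeholders, not anything the kubelet itself uses:

package main

import (
    "fmt"
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Placeholder kubeconfig path; the kubelet authenticates with its own credentials.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // One shared informer per resource type; under the hood a Reflector
    // performs LIST (resourceVersion=0) and then WATCH, filling a local store.
    factory := informers.NewSharedInformerFactoryWithOptions(
        client, 30*time.Second, informers.WithNamespace("openshift-ingress"))
    secrets := factory.Core().V1().Secrets().Informer()

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)

    // The informer-level equivalent of "Caches populated": the initial LIST landed.
    if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
        panic("cache never synced")
    }
    fmt.Println("caches populated for *v1.Secret in openshift-ingress")
}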
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.101312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"controller-manager-879f6c89f-8jsqt\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.121833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vh8\" (UniqueName: \"kubernetes.io/projected/ebb4c857-4f54-440f-81d7-74eadc588099-kube-api-access-t4vh8\") pod \"machine-api-operator-5694c8668f-nk5rn\" (UID: \"ebb4c857-4f54-440f-81d7-74eadc588099\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.143303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjkx\" (UniqueName: \"kubernetes.io/projected/21dc9dc0-702d-49a7-baed-f8e70f6867f3-kube-api-access-mpjkx\") pod \"downloads-7954f5f757-8l2v5\" (UID: \"21dc9dc0-702d-49a7-baed-f8e70f6867f3\") " pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.157306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"console-f9d7485db-7bw65\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.192032 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k2d\" (UniqueName: \"kubernetes.io/projected/7b579d29-157b-4ff2-b623-d4af8fd6a8fe-kube-api-access-n8k2d\") pod \"machine-approver-56656f9798-m5g9r\" (UID: \"7b579d29-157b-4ff2-b623-d4af8fd6a8fe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.201809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xpm\" (UniqueName: \"kubernetes.io/projected/b4ed27b8-56e7-4e93-aea6-83adae8affb6-kube-api-access-v7xpm\") pod \"console-operator-58897d9998-6gckm\" (UID: \"b4ed27b8-56e7-4e93-aea6-83adae8affb6\") " pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.215306 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.221476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfsf5\" (UniqueName: \"kubernetes.io/projected/f3639569-5d39-4fa1-863c-45307b3da476-kube-api-access-zfsf5\") pod \"openshift-apiserver-operator-796bbdcf4f-cqk7w\" (UID: \"f3639569-5d39-4fa1-863c-45307b3da476\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.233455 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.237874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.247778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.255560 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.264209 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.275262 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.294483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.307656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.312958 4751 request.go:700] Waited for 1.888323945s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.314268 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.322000 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.324544 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.336920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.342304 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b579d29_157b_4ff2_b623_d4af8fd6a8fe.slice/crio-f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722 WatchSource:0}: Error finding container f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722: Status 404 returned error can't find the container with id f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.352702 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.354890 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.363650 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b9c760_3c34_42cb_bb23_1f11dad50e58.slice/crio-5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312 WatchSource:0}: Error finding container 5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312: Status 404 returned error can't find the container with id 5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.367858 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.374952 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.408023 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.416908 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469220 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469302 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469343 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469424 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469484 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.469997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470031 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470067 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470081 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470341 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470363 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470712 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.470952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxthv\" (UniqueName: 
\"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.471171 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:53.971159815 +0000 UTC m=+152.716982464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471353 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471572 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471603 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.471741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.490361 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gckm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573704 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.573988 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxthv\" (UniqueName: \"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.574099 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574121 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574160 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574283 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574337 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod 
\"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574630 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574660 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574677 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574718 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.574769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574816 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574932 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574962 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574976 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.574991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575031 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575109 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575141 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575191 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 
21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575305 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575344 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575360 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575407 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
21:16:53.575422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575570 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575601 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575855 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575926 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575975 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.575991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: 
\"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.576238 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.576372 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.076356422 +0000 UTC m=+152.822179071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.577820 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-service-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.579546 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580078 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-ca\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.580855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-serving-cert\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.581091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.585831 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-stats-auth\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.586394 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/542e69b1-7290-4693-b85b-5c9566314a51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" 
(UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.587741 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/542e69b1-7290-4693-b85b-5c9566314a51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-etcd-client\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.588890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589406 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c37a69a-9a13-400f-bfff-0886b6062725-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c37a69a-9a13-400f-bfff-0886b6062725-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.589730 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb29a82-8be0-4219-81b1-fecfcb4e1061-service-ca-bundle\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 
30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.591540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.591942 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aea78078-eab1-4c82-b072-e6b65f959815-audit-dir\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.592143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.592554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-default-certificate\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.594607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.595433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af4fa723-4cc5-4fa1-9162-fa20b958fa29-config\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.595812 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.598943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfb29a82-8be0-4219-81b1-fecfcb4e1061-metrics-certs\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599595 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-audit-policies\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.599848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aea78078-eab1-4c82-b072-e6b65f959815-encryption-config\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.600199 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-serving-cert\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601628 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.601886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aea78078-eab1-4c82-b072-e6b65f959815-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.602129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.603084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/af4fa723-4cc5-4fa1-9162-fa20b958fa29-etcd-client\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.604014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.604128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.605029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.605075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.610517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.612977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.613036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80a7c9c5-51fd-457c-a16b-c7ad90f92811-metrics-tls\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.613065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.615412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: 
\"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.615861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.627005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2nhc\" (UniqueName: \"kubernetes.io/projected/8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6-kube-api-access-n2nhc\") pod \"cluster-samples-operator-665b6dd947-chjdb\" (UID: \"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.650784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxthv\" (UniqueName: \"kubernetes.io/projected/aea78078-eab1-4c82-b072-e6b65f959815-kube-api-access-kxthv\") pod \"apiserver-7bbb656c7d-8wjm9\" (UID: \"aea78078-eab1-4c82-b072-e6b65f959815\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.667727 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"oauth-openshift-558db77b4-6dcxn\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.675038 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-plkp9"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676899 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " 
pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676944 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.676997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677084 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677134 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677148 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: 
\"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677249 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677342 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-tmpfs\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 
21:16:53.677477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677522 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677590 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677653 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677704 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: 
\"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677807 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc 
kubenswrapper[4751]: I0130 21:16:53.677866 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.677898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.678310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.678424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-mountpoint-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20cd63ce-b8cf-45fa-9d89-d917cff2894b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679711 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50ebf9b-a11e-47ac-828c-f1858be195d7-trusted-ca\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.679780 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-csi-data-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.680029 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:54.18001934 +0000 UTC m=+152.925841989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680667 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-plugins-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.680939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681309 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-socket-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e79489-5e6b-421c-8019-b1d5161a0341-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.681606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.682229 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9930c0b-af24-4e39-b8e6-199a40779aff-config-volume\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d5d92565-846d-43a6-92e2-02351fec2f63-config\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-srv-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683485 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e79489-5e6b-421c-8019-b1d5161a0341-config\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683750 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4d9e5b29-c71c-4129-bd91-ccb81940c815-images\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.683792 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9930c0b-af24-4e39-b8e6-199a40779aff-metrics-tls\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.684861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-cabundle\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.684933 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8944fd86-eca1-4882-896d-1cd3faa4b418-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.686784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-node-bootstrap-token\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.687349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.687729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aae80f4-df3d-4545-8a9b-5a840e379b65-config\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac8a7752-ba4b-41eb-a085-b493f6876beb-registration-dir\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.688560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/236e4954-0baf-4d9e-b36f-eed37707af26-certs\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689101 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.689620 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8141131d-95f7-4103-bd2d-24630fc8e9b6-cert\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20cd63ce-b8cf-45fa-9d89-d917cff2894b-proxy-tls\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-profile-collector-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-webhook-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.690925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a50ebf9b-a11e-47ac-828c-f1858be195d7-metrics-tls\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.691061 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-apiservice-cert\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.691265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4d9e5b29-c71c-4129-bd91-ccb81940c815-proxy-tls\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.692951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d92565-846d-43a6-92e2-02351fec2f63-serving-cert\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aae80f4-df3d-4545-8a9b-5a840e379b65-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357257a0-2b96-4833-84cb-1c4326c34e61-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.693729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-signing-key\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.699785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-srv-cert\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.700192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6167bc7b-37d7-493c-93a9-dda69bedad76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.715866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.716815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8l2v5"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.722813 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.727611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdfzb\" (UniqueName: \"kubernetes.io/projected/af4fa723-4cc5-4fa1-9162-fa20b958fa29-kube-api-access-hdfzb\") pod \"etcd-operator-b45778765-zpvhg\" (UID: \"af4fa723-4cc5-4fa1-9162-fa20b958fa29\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.735865 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.736097 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.736948 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-nk5rn"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.747806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxbcq\" (UniqueName: \"kubernetes.io/projected/dfb29a82-8be0-4219-81b1-fecfcb4e1061-kube-api-access-dxbcq\") pod \"router-default-5444994796-w42cs\" (UID: \"dfb29a82-8be0-4219-81b1-fecfcb4e1061\") " pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.752080 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gckm"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.771259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgvv\" (UniqueName: \"kubernetes.io/projected/555517ab-ec2d-4534-8cc4-3ecbcdda7a1b-kube-api-access-stgvv\") pod \"openshift-controller-manager-operator-756b6f6bc6-zbt4w\" (UID: \"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.779204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.779379 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.279352827 +0000 UTC m=+153.025175486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.779551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.779848 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.27983579 +0000 UTC m=+153.025658459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.789695 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.803189 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"a2b4482621e2d1c6e88455c5d82d8284f2be0785d2d69e48d1c8409293eefb8c"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.807131 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.810150 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.810588 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.814674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"6b42ab36cab01aee98a06d95fe6c40dede3332d8845f2cd0513f8746b8ab4a01"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.816268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fpj\" (UniqueName: \"kubernetes.io/projected/80a7c9c5-51fd-457c-a16b-c7ad90f92811-kube-api-access-s9fpj\") pod \"dns-operator-744455d44c-mp5g5\" (UID: \"80a7c9c5-51fd-457c-a16b-c7ad90f92811\") " pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.821711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"d55e421aea0b26254543c3385167bdd9b2b3bddbed4b1e081fd8b94454d8f350"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.821748 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"f6f99673aa6bb0aad2f3c4036b7f5f6b83610d6ebe270ac0c27a7c0d5e533722"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825093 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825436 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8l2v5" event={"ID":"21dc9dc0-702d-49a7-baed-f8e70f6867f3","Type":"ContainerStarted","Data":"5a76b203cc9771016d91d3bc5cdeac6be80b178bf079cbd548e925d6b80782e8"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.825723 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.828091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerStarted","Data":"bcc3e35fa7bf77d352470a19ce3b00e0ae26473ecc7d562f4aa3b014710b8b83"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829508 4751 generic.go:334] "Generic (PLEG): container finished" podID="80b9c760-3c34-42cb-bb23-1f11dad50e58" containerID="ecb17adc0b077bbd204ae1ae355d34b0117514749487f0933b8b0674bb22cc23" exitCode=0 Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerDied","Data":"ecb17adc0b077bbd204ae1ae355d34b0117514749487f0933b8b0674bb22cc23"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerStarted","Data":"5d290ac6257102448a66da1338c6ff7602f0050bddbafd5f5dfec0384a0c4312"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.829829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfn6c\" (UniqueName: \"kubernetes.io/projected/542e69b1-7290-4693-b85b-5c9566314a51-kube-api-access-gfn6c\") pod \"cluster-image-registry-operator-dc59b4c8b-p6hjc\" (UID: \"542e69b1-7290-4693-b85b-5c9566314a51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.830970 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gckm" event={"ID":"b4ed27b8-56e7-4e93-aea6-83adae8affb6","Type":"ContainerStarted","Data":"05f6960d4109556e39ddd40a564de7a622a0e36a81ed53a0885707eebe6cf349"} Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.843756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.853753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"] Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.856175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqzz\" (UniqueName: \"kubernetes.io/projected/9c37a69a-9a13-400f-bfff-0886b6062725-kube-api-access-xrqzz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4x88q\" (UID: \"9c37a69a-9a13-400f-bfff-0886b6062725\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.856299 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.864179 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod322809f5_4f4c_487e_8488_6c62bac86f8f.slice/crio-f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3 WatchSource:0}: Error finding container f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3: Status 404 returned error can't find the container with id f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3
Jan 30 21:16:53 crc kubenswrapper[4751]: W0130 21:16:53.864467 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3639569_5d39_4fa1_863c_45307b3da476.slice/crio-6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31 WatchSource:0}: Error finding container 6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31: Status 404 returned error can't find the container with id 6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.872051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkk4\" (UniqueName: \"kubernetes.io/projected/8141131d-95f7-4103-bd2d-24630fc8e9b6-kube-api-access-wxkk4\") pod \"ingress-canary-m2zrs\" (UID: \"8141131d-95f7-4103-bd2d-24630fc8e9b6\") " pod="openshift-ingress-canary/ingress-canary-m2zrs"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.872579 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"]
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.873568 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9wvms"]
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.888661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.888816 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.388791961 +0000 UTC m=+153.134614610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.889013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.889640 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.389630293 +0000 UTC m=+153.135452942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.893564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhl7\" (UniqueName: \"kubernetes.io/projected/6167bc7b-37d7-493c-93a9-dda69bedad76-kube-api-access-clhl7\") pod \"olm-operator-6b444d44fb-z2l88\" (UID: \"6167bc7b-37d7-493c-93a9-dda69bedad76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.907444 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792m2\" (UniqueName: \"kubernetes.io/projected/20cd63ce-b8cf-45fa-9d89-d917cff2894b-kube-api-access-792m2\") pod \"machine-config-controller-84d6567774-99fpp\" (UID: \"20cd63ce-b8cf-45fa-9d89-d917cff2894b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.930933 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6cq6\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-kube-api-access-k6cq6\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.948613 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbjww\" (UniqueName: \"kubernetes.io/projected/236e4954-0baf-4d9e-b36f-eed37707af26-kube-api-access-lbjww\") pod \"machine-config-server-2hvtm\" (UID: \"236e4954-0baf-4d9e-b36f-eed37707af26\") " pod="openshift-machine-config-operator/machine-config-server-2hvtm"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.961972 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"
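The UnmountVolume and MountVolume failures above all share one cause: the volume lives on the kubevirt.io.hostpath-provisioner CSI driver, whose node plugin (the csi-hostpathplugin-tw9q7 pod, itself still waiting for a sandbox in this log) has not yet registered with the kubelet. Rather than blocking, the volume manager re-queues each operation with a delay, the `durationBeforeRetry 500ms` in the messages. A minimal sketch of that retry pattern, with invented helpers:

```go
// A minimal sketch (not kubelet's nestedpendingoperations code) of retrying
// a volume operation while the CSI driver is not yet registered.
package main

import (
	"fmt"
	"time"
)

// registeredDrivers stands in for kubelet's CSI driver registry, which is
// populated when a driver's node plugin registers over its plugin socket.
var registeredDrivers = map[string]bool{}

func mountDevice(driver string) error {
	if !registeredDrivers[driver] {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil
}

func main() {
	const driver = "kubevirt.io.hostpath-provisioner"
	const durationBeforeRetry = 500 * time.Millisecond

	for attempt := 1; ; attempt++ {
		if attempt == 4 {
			// Simulate the csi-hostpathplugin pod coming up and registering.
			registeredDrivers[driver] = true
		}
		if err := mountDevice(driver); err != nil {
			fmt.Printf("attempt %d failed, retrying in %v: %v\n", attempt, durationBeforeRetry, err)
			time.Sleep(durationBeforeRetry)
			continue
		}
		fmt.Println("MountDevice succeeded")
		return
	}
}
```

Once the plugin registers, the same operations succeed and the errors stop; during node startup this churn is expected and self-healing.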
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.971241 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m2zrs"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.982614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbm8\" (UniqueName: \"kubernetes.io/projected/130dae88-caa3-4e75-b3fc-d3b6dcd5b577-kube-api-access-hhbm8\") pod \"catalog-operator-68c6474976-mw25p\" (UID: \"130dae88-caa3-4e75-b3fc-d3b6dcd5b577\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.983910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2hvtm"
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.989813 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.990478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqqt\" (UniqueName: \"kubernetes.io/projected/c79ee24d-6dc8-4c72-911a-9ea6810a9f9a-kube-api-access-4tqqt\") pod \"service-ca-9c57cc56f-zzk29\" (UID: \"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a\") " pod="openshift-service-ca/service-ca-9c57cc56f-zzk29"
Jan 30 21:16:53 crc kubenswrapper[4751]: E0130 21:16:53.990566 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.490553761 +0000 UTC m=+153.236376410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:53 crc kubenswrapper[4751]: I0130 21:16:53.999183 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.009713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jchq\" (UniqueName: \"kubernetes.io/projected/8944fd86-eca1-4882-896d-1cd3faa4b418-kube-api-access-6jchq\") pod \"package-server-manager-789f6589d5-hmjv6\" (UID: \"8944fd86-eca1-4882-896d-1cd3faa4b418\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.039512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65e79489-5e6b-421c-8019-b1d5161a0341-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ffw4d\" (UID: \"65e79489-5e6b-421c-8019-b1d5161a0341\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.044478 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.054017 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50ebf9b-a11e-47ac-828c-f1858be195d7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n94s8\" (UID: \"a50ebf9b-a11e-47ac-828c-f1858be195d7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.069241 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdq8\" (UniqueName: \"kubernetes.io/projected/4d9e5b29-c71c-4129-bd91-ccb81940c815-kube-api-access-5fdq8\") pod \"machine-config-operator-74547568cd-tc4zf\" (UID: \"4d9e5b29-c71c-4129-bd91-ccb81940c815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.093217 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"collect-profiles-29496795-lg25p\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.094070 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.094581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.094895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.594884565 +0000 UTC m=+153.340707214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.116119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ttsm\" (UniqueName: \"kubernetes.io/projected/d5d92565-846d-43a6-92e2-02351fec2f63-kube-api-access-5ttsm\") pod \"service-ca-operator-777779d784-qthvh\" (UID: \"d5d92565-846d-43a6-92e2-02351fec2f63\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.128605 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.128639 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.132708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvvf\" (UniqueName: \"kubernetes.io/projected/a9930c0b-af24-4e39-b8e6-199a40779aff-kube-api-access-ndvvf\") pod \"dns-default-hrfwj\" (UID: \"a9930c0b-af24-4e39-b8e6-199a40779aff\") " pod="openshift-dns/dns-default-hrfwj"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.139720 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"
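The Liveness failure above ("connection refused" against 127.0.0.1:8798) simply means nothing is listening on the probe port yet; for HTTP probes the kubelet ultimately performs a plain GET and classifies any transport error as a failure. A minimal sketch, reusing the health endpoint from the log:

```go
// A minimal sketch of what an HTTP liveness/readiness check boils down to:
// an HTTP GET where "connection refused" means the server is not up yet.
// This is an illustration, not kubelet's prober implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) string {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// Matches the log's shape: the transport error becomes the output.
		return fmt.Sprintf("failure output=%q", err.Error())
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure status=%d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}
```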
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.152513 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhw4\" (UniqueName: \"kubernetes.io/projected/357257a0-2b96-4833-84cb-1c4326c34e61-kube-api-access-pmhw4\") pod \"control-plane-machine-set-operator-78cbb6b69f-xf2m8\" (UID: \"357257a0-2b96-4833-84cb-1c4326c34e61\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.169986 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"
Jan 30 21:16:54 crc kubenswrapper[4751]: W0130 21:16:54.173735 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod236e4954_0baf_4d9e_b36f_eed37707af26.slice/crio-6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493 WatchSource:0}: Error finding container 6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493: Status 404 returned error can't find the container with id 6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.180969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"marketplace-operator-79b997595-tr6kv\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.182339 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.190766 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197076 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzz4\" (UniqueName: \"kubernetes.io/projected/ac8a7752-ba4b-41eb-a085-b493f6876beb-kube-api-access-kkzz4\") pod \"csi-hostpathplugin-tw9q7\" (UID: \"ac8a7752-ba4b-41eb-a085-b493f6876beb\") " pod="hostpath-provisioner/csi-hostpathplugin-tw9q7"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197247 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.197583 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.198169 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.698149093 +0000 UTC m=+153.443971742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.207035 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.217066 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aae80f4-df3d-4545-8a9b-5a840e379b65-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xz8vz\" (UID: \"7aae80f4-df3d-4545-8a9b-5a840e379b65\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.217245 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.222834 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.232552 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.242649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/c1f866c2-11a6-4c9b-8d42-54e5f0a18195-kube-api-access-shckb\") pod \"multus-admission-controller-857f4d67dd-l4lnd\" (UID: \"c1f866c2-11a6-4c9b-8d42-54e5f0a18195\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.246801 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.256627 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.258377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7mz\" (UniqueName: \"kubernetes.io/projected/dd4cdd19-fcd5-4fa7-835b-f2c233746297-kube-api-access-cv7mz\") pod \"migrator-59844c95c7-vkfk8\" (UID: \"dd4cdd19-fcd5-4fa7-835b-f2c233746297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.268830 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mp5g5"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.287046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hrfwj"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.294293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a2068cc-b08f-467a-aaf9-a3bbfd99511d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-njw5m\" (UID: \"3a2068cc-b08f-467a-aaf9-a3bbfd99511d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.298423 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.299234 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.299562 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.799551633 +0000 UTC m=+153.545374282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.302989 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zpvhg"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.318113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj26f\" (UniqueName: \"kubernetes.io/projected/cabe4321-bcb6-4d9e-905f-ab26bbc11b86-kube-api-access-xj26f\") pod \"packageserver-d55dfcdfc-nldk6\" (UID: \"cabe4321-bcb6-4d9e-905f-ab26bbc11b86\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.325360 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.399860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.400304 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:54.900283075 +0000 UTC m=+153.646105724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.405799 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.433005 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.450107 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.463733 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.477586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.502416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.502877 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.002867266 +0000 UTC m=+153.748689915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.537756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.554523 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m2zrs"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.554885 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.603608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.603869 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.103855125 +0000 UTC m=+153.849677774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.621465 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.704464 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.706960 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.206922407 +0000 UTC m=+153.952745056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.782729 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.782776 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.791180 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.806071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.806219 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.306198853 +0000 UTC m=+154.052021502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.806309 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.806602 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.306594903 +0000 UTC m=+154.052417552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.863563 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" event={"ID":"5c9671c2-84f9-4719-b497-4fa77803105b","Type":"ContainerStarted","Data":"22d4e0441404579c411a624d8d750283c4e57965f2a96bf423b811ee07efd8db"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.863615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" event={"ID":"5c9671c2-84f9-4719-b497-4fa77803105b","Type":"ContainerStarted","Data":"9a7aae1f22c94a94fa3580fe84ad185240e004851381ce969903a5a4d6e1f1b2"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.873834 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerStarted","Data":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.874059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerStarted","Data":"f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.874075 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.885485 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerStarted","Data":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.885877 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.899838 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8z9vp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.899900 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.903887 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8jsqt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.903919 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.909009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:54 crc kubenswrapper[4751]: E0130 21:16:54.909401 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.409382999 +0000 UTC m=+154.155205648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.910126 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.931496 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.945172 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8"]
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.949447 4751 csr.go:261] certificate signing request csr-bgs9s is approved, waiting to be issued
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.953055 4751 csr.go:257] certificate signing request csr-bgs9s is issued
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.991023 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" event={"ID":"65e79489-5e6b-421c-8019-b1d5161a0341","Type":"ContainerStarted","Data":"d864ac0bcc076ec8c9685884875686a30eb7c70458a350fc5884fec9ca43f99f"}
Jan 30 21:16:54 crc kubenswrapper[4751]: I0130 21:16:54.995576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2hvtm" event={"ID":"236e4954-0baf-4d9e-b36f-eed37707af26","Type":"ContainerStarted","Data":"6d35de23ffaea959ae81b921d830d837f52c056848673f50ac0e8d34bbebe493"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.010879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
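The csr.go:261/csr.go:257 pair above records kubelet certificate rotation as a two-phase flow: an approver (the machine-approver whose containers started earlier in this log) flips the CSR's Approved condition, and only afterwards does the signer populate the certificate the kubelet is waiting for. A minimal sketch, with invented types rather than the real client-go CSR API:

```go
// A minimal sketch of "csr-bgs9s is approved, waiting to be issued" followed
// by "csr-bgs9s is issued": approval alone is not enough; the requester
// polls until the signer fills in the certificate.
package main

import (
	"fmt"
	"time"
)

type CSR struct {
	Name        string
	Approved    bool
	Certificate []byte // written by the signer after approval
}

func waitIssued(csr *CSR, poll time.Duration) {
	for csr.Certificate == nil {
		if csr.Approved {
			fmt.Printf("certificate signing request %s is approved, waiting to be issued\n", csr.Name)
		}
		time.Sleep(poll)
		// In a real cluster the signer controller writes this field;
		// simulated here so the loop terminates.
		csr.Certificate = []byte("PEM...")
	}
	fmt.Printf("certificate signing request %s is issued\n", csr.Name)
}

func main() {
	waitIssued(&CSR{Name: "csr-bgs9s", Approved: true}, 100*time.Millisecond)
}
```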
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.015636 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.515619072 +0000 UTC m=+154.261441721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.016986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" event={"ID":"f3639569-5d39-4fa1-863c-45307b3da476","Type":"ContainerStarted","Data":"79a21c75c107c4f2158bf7e172ea407be9cd48b91f6ed6fa8d70cbccfdcb656a"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.017027 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" event={"ID":"f3639569-5d39-4fa1-863c-45307b3da476","Type":"ContainerStarted","Data":"6c52c1884413cb9641c5954d9df153e6466c3979591a5be6ee9c86633ba98a31"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029772 4751 generic.go:334] "Generic (PLEG): container finished" podID="aea78078-eab1-4c82-b072-e6b65f959815" containerID="b94b03a7192fe7d8804d522deb3ec025698483a8fc2ef870057c74c3995d16f8" exitCode=0
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerDied","Data":"b94b03a7192fe7d8804d522deb3ec025698483a8fc2ef870057c74c3995d16f8"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.029911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerStarted","Data":"a44527cbe58f7fee1ceaf223fce522274d1ff8197428c02cc745a0767403f1ec"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.050824 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"c9b287da83607151b9420b2586281c9669ffeccabc1f7b5fd6fc311facbd6eb9"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.061277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" event={"ID":"9c37a69a-9a13-400f-bfff-0886b6062725","Type":"ContainerStarted","Data":"9a7afff7044e5e85896ce64d7793e274ce92f64d4888f911c586ba802b52ab29"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.066487 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" event={"ID":"80b9c760-3c34-42cb-bb23-1f11dad50e58","Type":"ContainerStarted","Data":"1a7879abe96eabbeb776b4634c7ae0e1936702b25f911824e0f364bfc112d5dc"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.066905 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.068707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gckm" event={"ID":"b4ed27b8-56e7-4e93-aea6-83adae8affb6","Type":"ContainerStarted","Data":"aca02e92ad644ec3adac0a27ce5b841b690fa8614def927cae46ab70d4f6b7cb"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.069111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.074108 4751 generic.go:334] "Generic (PLEG): container finished" podID="6d872f03-d4d0-49bc-9758-05060035dafa" containerID="4e83864f2464e79ff40daf58aed55d583ac6fc82aa4375e098a27d4341cf6206" exitCode=0
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.074467 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerDied","Data":"4e83864f2464e79ff40daf58aed55d583ac6fc82aa4375e098a27d4341cf6206"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.073836 4751 patch_prober.go:28] interesting pod/console-operator-58897d9998-6gckm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.075863 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6gckm" podUID="b4ed27b8-56e7-4e93-aea6-83adae8affb6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.080084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerStarted","Data":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.080114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerStarted","Data":"d63914e011c114b25558640a8b61cb4256ca45025b1be36724b2e0af5265302e"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.084640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" event={"ID":"af4fa723-4cc5-4fa1-9162-fa20b958fa29","Type":"ContainerStarted","Data":"f9b089997389b30f379a8257f5286fe8d62441d79ee23bede67069d166514437"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.086468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"5c6ff82948286b3cc8f625bf9d46258005421cb15eed0158df2a550a684ab697"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.088098 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8l2v5" event={"ID":"21dc9dc0-702d-49a7-baed-f8e70f6867f3","Type":"ContainerStarted","Data":"8dc25416ce0431f51bfd20b5e06b1682347e5539e22a7b4fc7e753265b6fc033"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.088531 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8l2v5"
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096415 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w42cs" event={"ID":"dfb29a82-8be0-4219-81b1-fecfcb4e1061","Type":"ContainerStarted","Data":"f6ac8436f1fdf1f1406416501a90fbf4198d690b7d48a5884450e4eb3ebfdac1"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-w42cs" event={"ID":"dfb29a82-8be0-4219-81b1-fecfcb4e1061","Type":"ContainerStarted","Data":"a6144931aee6b55595d67ead250bebeb7cd485c34dfa13319162d68a8ced2c29"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096808 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.096850 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.111387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.112397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.612381683 +0000 UTC m=+154.358204332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.113664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" event={"ID":"7b579d29-157b-4ff2-b623-d4af8fd6a8fe","Type":"ContainerStarted","Data":"f7efaa07b3d37ec3362704b99d115e2d1605bd1b2ac82ae46e09fd80b6304048"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.118487 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" event={"ID":"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b","Type":"ContainerStarted","Data":"b0be84163e14aa4348caefe04d4691c0deb5d7eaec59b2fb21fdf5c719b3d810"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.159199 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerStarted","Data":"85f9f12a183ee9ac32edf469f266b83c69141757b64a96e9390b64f35e4d5e44"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.164571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" event={"ID":"6167bc7b-37d7-493c-93a9-dda69bedad76","Type":"ContainerStarted","Data":"043c1985b11074e7b3354ea056667e2302b6db796cdceab01b534bf7c69daf2f"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.167303 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m2zrs" event={"ID":"8141131d-95f7-4103-bd2d-24630fc8e9b6","Type":"ContainerStarted","Data":"5b8d1f346d32a1b76defc440996ae2e761261c841faaff7479949fc80899404b"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.173199 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"612f2d6977f92b53466e960bb99d479304cdfbb8a53532347c3c91d7e97452e5"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.173264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" event={"ID":"ebb4c857-4f54-440f-81d7-74eadc588099","Type":"ContainerStarted","Data":"522d99a2868e8fb58ae72c360b2455cb8b41c33a48074ae5c988e629653b1ce0"}
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.194812 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.194854 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tw9q7"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.197097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zzk29"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.212965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.214610 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.714595704 +0000 UTC m=+154.460418353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.284605 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5243a1a5_2eaa_4437_b10e_602439c7c838.slice/crio-645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d WatchSource:0}: Error finding container 645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d: Status 404 returned error can't find the container with id 645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.315320 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.315445 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.81542893 +0000 UTC m=+154.561251579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.315663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.316667 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.81665053 +0000 UTC m=+154.562473179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.394534 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.409556 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfwj"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.417410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.417590 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.917565258 +0000 UTC m=+154.663387907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.417767 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.418150 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:55.918142592 +0000 UTC m=+154.663965241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.420467 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8944fd86_eca1_4882_896d_1cd3faa4b418.slice/crio-80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118 WatchSource:0}: Error finding container 80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118: Status 404 returned error can't find the container with id 80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.474652 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l4lnd"]
Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.481702 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9930c0b_af24_4e39_b8e6_199a40779aff.slice/crio-8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b WatchSource:0}: Error finding container 8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b: Status 404 returned error can't find the container with id 8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.488496 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.492538 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.494063 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m"]
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.519376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.521698 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.019568984 +0000 UTC m=+154.765391633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.521744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.522269 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.022258492 +0000 UTC m=+154.768081141 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.535594 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2068cc_b08f_467a_aaf9_a3bbfd99511d.slice/crio-f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b WatchSource:0}: Error finding container f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b: Status 404 returned error can't find the container with id f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.543567 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357257a0_2b96_4833_84cb_1c4326c34e61.slice/crio-19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591 WatchSource:0}: Error finding container 19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591: Status 404 returned error can't find the container with id 19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591 Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.545532 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabe4321_bcb6_4d9e_905f_ab26bbc11b86.slice/crio-cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e WatchSource:0}: Error finding container cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e: Status 404 returned error can't find the container with id cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.589494 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.609470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.614978 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qthvh"] Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.622930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.623426 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.123410306 +0000 UTC m=+154.869232955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: W0130 21:16:55.670063 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd4cdd19_fcd5_4fa7_835b_f2c233746297.slice/crio-5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795 WatchSource:0}: Error finding container 5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795: Status 404 returned error can't find the container with id 5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795 Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.724040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.724352 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.224341124 +0000 UTC m=+154.970163773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.825913 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.826430 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.32641601 +0000 UTC m=+155.072238659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.827669 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.858499 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:55 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:55 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:55 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.858546 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.930311 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:55 crc kubenswrapper[4751]: E0130 21:16:55.930895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.430883568 +0000 UTC m=+155.176706217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.956668 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 21:11:54 +0000 UTC, rotation deadline is 2026-11-06 20:11:49.470465005 +0000 UTC Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.956722 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6718h54m53.513745139s for next certificate rotation Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.969804 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podStartSLOduration=127.969786988 podStartE2EDuration="2m7.969786988s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:55.942541275 +0000 UTC m=+154.688363924" watchObservedRunningTime="2026-01-30 21:16:55.969786988 +0000 UTC m=+154.715609637" Jan 30 21:16:55 crc kubenswrapper[4751]: I0130 21:16:55.977273 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7bw65" podStartSLOduration=127.977255168 podStartE2EDuration="2m7.977255168s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:55.96947238 +0000 UTC m=+154.715295029" watchObservedRunningTime="2026-01-30 21:16:55.977255168 +0000 UTC m=+154.723077817" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.007644 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podStartSLOduration=127.007628221 podStartE2EDuration="2m7.007628221s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.006451571 +0000 UTC m=+154.752274220" watchObservedRunningTime="2026-01-30 21:16:56.007628221 +0000 UTC m=+154.753450870" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.066455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.066788 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.566773046 +0000 UTC m=+155.312595695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.116758 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cqk7w" podStartSLOduration=128.116736067 podStartE2EDuration="2m8.116736067s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.116357578 +0000 UTC m=+154.862180227" watchObservedRunningTime="2026-01-30 21:16:56.116736067 +0000 UTC m=+154.862558716" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.148695 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9wvms" podStartSLOduration=128.14868071 podStartE2EDuration="2m8.14868071s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.147654634 +0000 UTC m=+154.893477283" watchObservedRunningTime="2026-01-30 21:16:56.14868071 +0000 UTC m=+154.894503359" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.171608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.171926 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.67191302 +0000 UTC m=+155.417735669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.225722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" event={"ID":"aea78078-eab1-4c82-b072-e6b65f959815","Type":"ContainerStarted","Data":"e40e173cc0f51a6b097e514b59afc1940f0317e68c359b09cbfe3bf288df4d30"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.228870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" event={"ID":"542e69b1-7290-4693-b85b-5c9566314a51","Type":"ContainerStarted","Data":"1844b4d988b49509adb40ff75d4d970ed70ebfe468e66c881ff3687c883f5a60"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.228921 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" event={"ID":"542e69b1-7290-4693-b85b-5c9566314a51","Type":"ContainerStarted","Data":"9494c4fdb0e6e31f5baf4b51551b2df0577ec8cd54500dab8c9b232d142be352"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.235366 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"5f65da52284fb7569623f0439e2b42fd9889ef7afc3035696cce46a3f818d795"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.242858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg" podStartSLOduration=128.242842446 podStartE2EDuration="2m8.242842446s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.241920062 +0000 UTC m=+154.987742711" watchObservedRunningTime="2026-01-30 21:16:56.242842446 +0000 UTC m=+154.988665095" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.262190 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" event={"ID":"357257a0-2b96-4833-84cb-1c4326c34e61","Type":"ContainerStarted","Data":"9643936ded80d717e1c8cabdbac4b86afe7dffa2980b0d30f1cc6e306cb118de"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.262246 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" event={"ID":"357257a0-2b96-4833-84cb-1c4326c34e61","Type":"ContainerStarted","Data":"19c67723388659a8252df5187303e97cd1624a168c40ced3350dea12fcdb2591"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.270225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"8718a91e9e216a528e7a03a96184b721c4dcf45240361d27a287e5d95477615b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.271894 4751 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console-operator/console-operator-58897d9998-6gckm" podStartSLOduration=128.271877884 podStartE2EDuration="2m8.271877884s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.270379497 +0000 UTC m=+155.016202146" watchObservedRunningTime="2026-01-30 21:16:56.271877884 +0000 UTC m=+155.017700533" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.272639 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.273600 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.773584888 +0000 UTC m=+155.519407537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.277635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" event={"ID":"7aae80f4-df3d-4545-8a9b-5a840e379b65","Type":"ContainerStarted","Data":"5a2da5db1537dd1e0caeb21de13dbfe7277a6c921b5656ebef43be3650a0aecf"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.294090 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-w42cs" podStartSLOduration=128.294075899 podStartE2EDuration="2m8.294075899s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.293907215 +0000 UTC m=+155.039729864" watchObservedRunningTime="2026-01-30 21:16:56.294075899 +0000 UTC m=+155.039898538" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.296721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"88fd38dffccf9a74bc43571a7f00a9c92c0f29620e4e59cc8b050ef11a744029"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.317494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"96e1afb3b0419908f40cb60623fd27af3b84f4d53f78c69eddb9b5e5b22e2c35"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.317537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" 
event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"97af7a7c37ea36091f979446aceb409c1476e9f3a415108218fcd95019c623c8"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.334987 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8l2v5" podStartSLOduration=128.33497243 podStartE2EDuration="2m8.33497243s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.334800656 +0000 UTC m=+155.080623305" watchObservedRunningTime="2026-01-30 21:16:56.33497243 +0000 UTC m=+155.080795079" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.348118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m2zrs" event={"ID":"8141131d-95f7-4103-bd2d-24630fc8e9b6","Type":"ContainerStarted","Data":"4c777cc15bebf50d178b32d29256e0b7dd80cf9821f544ec2e234e019d95711a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.354149 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerStarted","Data":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.355079 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.374062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.376627 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.876611048 +0000 UTC m=+155.622433697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.385936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" event={"ID":"6167bc7b-37d7-493c-93a9-dda69bedad76","Type":"ContainerStarted","Data":"7a29fc58d45eaf746538e78896087f19c530206258fd49eedc7a7f2a4618055f"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.387747 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.399901 4751 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6dcxn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.399940 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.419505 4751 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z2l88 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.419562 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" podUID="6167bc7b-37d7-493c-93a9-dda69bedad76" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.426830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"f18a9550c2931bce9d53f0bac018af4c356d7b3a3ffa1a84e39a40722656f4c1"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.426888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" event={"ID":"80a7c9c5-51fd-457c-a16b-c7ad90f92811","Type":"ContainerStarted","Data":"cfcf61badfda05d6791c91d543c6e454cd556a505b5b99f7a6509fecf9ef1b71"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.451007 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-nk5rn" podStartSLOduration=127.450994511 podStartE2EDuration="2m7.450994511s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.399614645 +0000 UTC m=+155.145437284" watchObservedRunningTime="2026-01-30 21:16:56.450994511 +0000 UTC m=+155.196817160" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.467809 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" event={"ID":"cabe4321-bcb6-4d9e-905f-ab26bbc11b86","Type":"ContainerStarted","Data":"cf0e82f604a527d1b471a6f5e1f6957c9a7ed85b1aa22eddd59941f61465a91e"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.482199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.482367 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.982352009 +0000 UTC m=+155.728174658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.482667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.483045 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:56.983038396 +0000 UTC m=+155.728861045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.491003 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-m5g9r" podStartSLOduration=128.490985919 podStartE2EDuration="2m8.490985919s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.451577846 +0000 UTC m=+155.197400495" watchObservedRunningTime="2026-01-30 21:16:56.490985919 +0000 UTC m=+155.236808568" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.491359 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m2zrs" podStartSLOduration=5.491355428 podStartE2EDuration="5.491355428s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.489579463 +0000 UTC m=+155.235402102" watchObservedRunningTime="2026-01-30 21:16:56.491355428 +0000 UTC m=+155.237178077" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.522155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"00340f6885b4ea09edb9938f2eaf4cb4b1bb4178081dd8c51a8ebeec49aede6b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.540985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" event={"ID":"9c37a69a-9a13-400f-bfff-0886b6062725","Type":"ContainerStarted","Data":"9b04f80bd90892178f203a15d0c0e79a01004082e993249f204ce6be1d333b4f"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.567969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"709372be1327def5f43b7802a83ef2673dfaefa928d2bef73c5b5ac6bc7f6656"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.568263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"c01f8af6fedfd6718dc8ed93869d4e4dd7fb85394b5774e9ceeadd79f093c275"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.568274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" event={"ID":"20cd63ce-b8cf-45fa-9d89-d917cff2894b","Type":"ContainerStarted","Data":"15c959dfeaee5a5289df5b5e9993a2d074e0b2032dc92fdab6515e23455331c4"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.580538 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-mp5g5" podStartSLOduration=128.580515787 podStartE2EDuration="2m8.580515787s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.525159249 +0000 UTC m=+155.270981898" watchObservedRunningTime="2026-01-30 21:16:56.580515787 +0000 UTC m=+155.326338436" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.587588 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.590413 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.090375618 +0000 UTC m=+155.836198277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.592076 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2hvtm" event={"ID":"236e4954-0baf-4d9e-b36f-eed37707af26","Type":"ContainerStarted","Data":"03868f220c052b3c6459f165fd26d837b0755e6dbe97d73e57fd6aae5d2df2d7"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608068 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" event={"ID":"130dae88-caa3-4e75-b3fc-d3b6dcd5b577","Type":"ContainerStarted","Data":"e2cfc2131c1e2622f48bd285e09383a7f2fd25ccf7150f6c4a9e51294735c7f6"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608113 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" event={"ID":"130dae88-caa3-4e75-b3fc-d3b6dcd5b577","Type":"ContainerStarted","Data":"874322af55667e4c057f700674220f29bd239a5ba05d42ebeade5c54fd297252"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.608915 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.610356 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xf2m8" podStartSLOduration=127.610334896 podStartE2EDuration="2m7.610334896s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.601797389 +0000 UTC m=+155.347620028" watchObservedRunningTime="2026-01-30 21:16:56.610334896 +0000 UTC m=+155.356157545" Jan 30 
21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.636665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" event={"ID":"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a","Type":"ContainerStarted","Data":"6f766816f316b82da30a3fc3ad8967029a9b6b92bf740e55f8a941bab728a527"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.636711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" event={"ID":"c79ee24d-6dc8-4c72-911a-9ea6810a9f9a","Type":"ContainerStarted","Data":"221ec0493271241afc9442c42e1ae75b2e2a81236db1a2e00a1a6a38c9fce188"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.649879 4751 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mw25p container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.649927 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" podUID="130dae88-caa3-4e75-b3fc-d3b6dcd5b577" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.659105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" event={"ID":"d5d92565-846d-43a6-92e2-02351fec2f63","Type":"ContainerStarted","Data":"1b74f4ded95aaf79d5704f77cee527595aa4ec83d7d477e864a8293f5ef8f596"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.663386 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podStartSLOduration=128.663365975 podStartE2EDuration="2m8.663365975s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.659160468 +0000 UTC m=+155.404983117" watchObservedRunningTime="2026-01-30 21:16:56.663365975 +0000 UTC m=+155.409188624" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"d9b267654323352bb0251bdfb2dfad2c601b310fe1e8a971b0087271afb9896a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"80507a826eb117e9f84b1fffadaba4ffe071d8b7206e58ed16ef2e608361f118"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.694861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 
21:16:56.698747 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p6hjc" podStartSLOduration=128.698725004 podStartE2EDuration="2m8.698725004s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.694520698 +0000 UTC m=+155.440343347" watchObservedRunningTime="2026-01-30 21:16:56.698725004 +0000 UTC m=+155.444547653" Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.700437 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.200418137 +0000 UTC m=+155.946240816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.712762 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"f19059a58cb3c51eeb050b443db58c71f664e505dd90a7d658b0a494d918d0c7"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.714857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" event={"ID":"af4fa723-4cc5-4fa1-9162-fa20b958fa29","Type":"ContainerStarted","Data":"d7f2abaa8247b2b0f1640d90c2d63f187961c6c0db223265431fa28f47844df5"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.727857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" event={"ID":"65e79489-5e6b-421c-8019-b1d5161a0341","Type":"ContainerStarted","Data":"0f075caa6b96aeea7b26e5dd9903f8cd6841c2c8cc07081290c2d29c12a5bc8a"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.740674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"1a8f224bfad774a94e3a431422839cc8cd5a59af5150c7aa3773c22ce268a7d2"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.740738 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" event={"ID":"8c6f08b2-77fe-4bc3-8cd3-370b6a6537e6","Type":"ContainerStarted","Data":"367f0b1c05ba7ca0b98e36f009034070defc27427e06a8c4978262a40c4dfa48"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.747472 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9" podStartSLOduration=127.747453294 podStartE2EDuration="2m7.747453294s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:16:56.741443352 +0000 UTC m=+155.487266011" watchObservedRunningTime="2026-01-30 21:16:56.747453294 +0000 UTC m=+155.493275943" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.756975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerStarted","Data":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.757017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerStarted","Data":"645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.757787 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.759517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" event={"ID":"555517ab-ec2d-4534-8cc4-3ecbcdda7a1b","Type":"ContainerStarted","Data":"9ad457c424e2d83475a43a4241289f140884cfa929ad927ff45d123733d5d732"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.775129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerStarted","Data":"e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.775475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerStarted","Data":"b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780089 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"6b42217edb17f579b33e8f78ad708c6bc0d1aa2d441939bea28980480fa3e4b1"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"b72d85bf7b25e672b8bd747daf330f9431c1329f8bcf19a5adc8f7d9dffafb40"} Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780361 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.780392 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 21:16:56 crc kubenswrapper[4751]: 
I0130 21:16:56.782032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" event={"ID":"3a2068cc-b08f-467a-aaf9-a3bbfd99511d","Type":"ContainerStarted","Data":"f0e55941fc59c8062ba8f117382b8fcf44abb09f9648c8ecbd8edb156fd3143b"}
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.785269 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.785312 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.793655 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88" podStartSLOduration=127.793624139 podStartE2EDuration="2m7.793624139s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.79054823 +0000 UTC m=+155.536370879" watchObservedRunningTime="2026-01-30 21:16:56.793624139 +0000 UTC m=+155.539446788"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.795097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.798693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.799177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.799391 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.299375265 +0000 UTC m=+156.045197914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.799545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.801853 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.301844978 +0000 UTC m=+156.047667627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.815720 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6gckm"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.831237 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zbt4w" podStartSLOduration=128.831224316 podStartE2EDuration="2m8.831224316s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.830316452 +0000 UTC m=+155.576139101" watchObservedRunningTime="2026-01-30 21:16:56.831224316 +0000 UTC m=+155.577046965"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.838126 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:16:56 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld
Jan 30 21:16:56 crc kubenswrapper[4751]: [+]process-running ok
Jan 30 21:16:56 crc kubenswrapper[4751]: healthz check failed
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.838191 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.903653 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-99fpp" podStartSLOduration=127.903631988 podStartE2EDuration="2m7.903631988s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.869674783 +0000 UTC m=+155.615497432" watchObservedRunningTime="2026-01-30 21:16:56.903631988 +0000 UTC m=+155.649454637"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.904306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.904427 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.404406158 +0000 UTC m=+156.150228807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.904947 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p" podStartSLOduration=127.904939861 podStartE2EDuration="2m7.904939861s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.904206883 +0000 UTC m=+155.650029532" watchObservedRunningTime="2026-01-30 21:16:56.904939861 +0000 UTC m=+155.650762510"
Jan 30 21:16:56 crc kubenswrapper[4751]: I0130 21:16:56.906385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:56 crc kubenswrapper[4751]: E0130 21:16:56.910008 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.40999564 +0000 UTC m=+156.155818289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.013745 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.014203 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podStartSLOduration=128.014192541 podStartE2EDuration="2m8.014192541s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:56.947643157 +0000 UTC m=+155.693465796" watchObservedRunningTime="2026-01-30 21:16:57.014192541 +0000 UTC m=+155.760015190"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.014314 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.514303384 +0000 UTC m=+156.260126033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.014793 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zpvhg" podStartSLOduration=129.014788706 podStartE2EDuration="2m9.014788706s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.011527253 +0000 UTC m=+155.757349902" watchObservedRunningTime="2026-01-30 21:16:57.014788706 +0000 UTC m=+155.760611355"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.081875 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ffw4d" podStartSLOduration=129.081849472 podStartE2EDuration="2m9.081849472s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.050634308 +0000 UTC m=+155.796456957" watchObservedRunningTime="2026-01-30 21:16:57.081849472 +0000 UTC m=+155.827672121"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.082803 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" podStartSLOduration=117.082797077 podStartE2EDuration="1m57.082797077s" podCreationTimestamp="2026-01-30 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.081732459 +0000 UTC m=+155.827555108" watchObservedRunningTime="2026-01-30 21:16:57.082797077 +0000 UTC m=+155.828619726"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.100889 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4x88q" podStartSLOduration=129.100857646 podStartE2EDuration="2m9.100857646s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.097684235 +0000 UTC m=+155.843506884" watchObservedRunningTime="2026-01-30 21:16:57.100857646 +0000 UTC m=+155.846680295"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.115671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.115917 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.615906988 +0000 UTC m=+156.361729637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.160493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zzk29" podStartSLOduration=128.160466112 podStartE2EDuration="2m8.160466112s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.15998374 +0000 UTC m=+155.905806389" watchObservedRunningTime="2026-01-30 21:16:57.160466112 +0000 UTC m=+155.906288751"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.162555 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-chjdb" podStartSLOduration=129.162547935 podStartE2EDuration="2m9.162547935s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.125573945 +0000 UTC m=+155.871396594" watchObservedRunningTime="2026-01-30 21:16:57.162547935 +0000 UTC m=+155.908370584"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.183622 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2hvtm" podStartSLOduration=6.183609112 podStartE2EDuration="6.183609112s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.182246257 +0000 UTC m=+155.928068906" watchObservedRunningTime="2026-01-30 21:16:57.183609112 +0000 UTC m=+155.929431761"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.218399 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.218530 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.718510869 +0000 UTC m=+156.464333518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.218626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.218877 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.718868948 +0000 UTC m=+156.464691587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.319368 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.320548 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.820532165 +0000 UTC m=+156.566354814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.426064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.426357 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:57.926344837 +0000 UTC m=+156.672167486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.527289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.527577 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.027561682 +0000 UTC m=+156.773384331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.628319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.628579 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.128566672 +0000 UTC m=+156.874389321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.729426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.729605 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.229579432 +0000 UTC m=+156.975402081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.729659 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.729970 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.229960633 +0000 UTC m=+156.975783282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.789571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" event={"ID":"4d9e5b29-c71c-4129-bd91-ccb81940c815","Type":"ContainerStarted","Data":"360186e3f9fbdb7bb5483ae1bbca46098ef76ca719be5753d37c401e01f09c3c"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.791404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" event={"ID":"3a2068cc-b08f-467a-aaf9-a3bbfd99511d","Type":"ContainerStarted","Data":"1e13584c446211f465f2d1ac4a5de34086db78494ff1304dc396037fdb0fe0b1"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.793155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"5b2a4984dfbf89b0ae73e2743675b7a8623b010a8f7576d244f001eeeefbfb9d"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.793183 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" event={"ID":"dd4cdd19-fcd5-4fa7-835b-f2c233746297","Type":"ContainerStarted","Data":"daa4aa43dd5da7fb3132eae020ee2ac3bab4429041936615bd028d8234efa696"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"2b380717e2151d03c3e094cfe7519c12e0b58dffabdc518fdac1f169cb3889be"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfwj" event={"ID":"a9930c0b-af24-4e39-b8e6-199a40779aff","Type":"ContainerStarted","Data":"faff2b0bb4864f702d447f8063fbfcb7134b5d671e7233685154bd9c375804f9"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.794744 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hrfwj"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.795731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" event={"ID":"d5d92565-846d-43a6-92e2-02351fec2f63","Type":"ContainerStarted","Data":"113dd0733a0d583b9bf7cd1daf32704bde6dbfc2df9582cc877fd85f2ca4bd07"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.797914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" event={"ID":"6d872f03-d4d0-49bc-9758-05060035dafa","Type":"ContainerStarted","Data":"c3fa6c4e3647efbc9a9c4ba6681fa6e83dba52be62bc5e47071415454e50dc07"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.798892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"9fe6859ff764b2f41d6144031bd1f679edfc6f8944e225f3b92dfe1871d75e28"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.800333 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" event={"ID":"8944fd86-eca1-4882-896d-1cd3faa4b418","Type":"ContainerStarted","Data":"0e5532a413833208984b0d47d0ea538fb80d0562947d9be06f62fd7a9745fccd"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.800485 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.802302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" event={"ID":"a50ebf9b-a11e-47ac-828c-f1858be195d7","Type":"ContainerStarted","Data":"f831c2bf0099d351025e6d632d19e9763440cb68fd1ceb4d79eade95dc2c8c24"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.804632 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" event={"ID":"cabe4321-bcb6-4d9e-905f-ab26bbc11b86","Type":"ContainerStarted","Data":"e68ba3e2efbe19e2c938e0a4c80a3c9117953b991aa34ce826bb49d53a5d4d54"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.804792 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807131 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nldk6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807197 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" podUID="cabe4321-bcb6-4d9e-905f-ab26bbc11b86" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"ffdd2816ecc76f6e2e1c431df20c09cedf555e2412af905682bdfa0dfa33a8aa"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.807826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" event={"ID":"c1f866c2-11a6-4c9b-8d42-54e5f0a18195","Type":"ContainerStarted","Data":"d5595fc9fc0e8edb4853b44cce578c262b541d49bf17b4d539ff3c78acd649bb"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.809510 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" event={"ID":"7aae80f4-df3d-4545-8a9b-5a840e379b65","Type":"ContainerStarted","Data":"4a3ee9dcc1747c0610c42dc8198b7def506f178d1490eea148f0037ff4e5932c"}
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.810005 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tr6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.810044 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.816043 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x98hg"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.830447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.830612 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.330571462 +0000 UTC m=+157.076394111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.831032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.831572 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.331554347 +0000 UTC m=+157.077377116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832514 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mw25p"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832879 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:16:57 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld
Jan 30 21:16:57 crc kubenswrapper[4751]: [+]process-running ok
Jan 30 21:16:57 crc kubenswrapper[4751]: healthz check failed
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.832929 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.866756 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2l88"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.870433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.888894 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tc4zf" podStartSLOduration=128.888875995 podStartE2EDuration="2m8.888875995s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.887311436 +0000 UTC m=+156.633134085" watchObservedRunningTime="2026-01-30 21:16:57.888875995 +0000 UTC m=+156.634698644"
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.932104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.932287 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.432262069 +0000 UTC m=+157.178084718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:57 crc kubenswrapper[4751]: I0130 21:16:57.937470 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:57 crc kubenswrapper[4751]: E0130 21:16:57.938039 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.438025356 +0000 UTC m=+157.183848005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.000478 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" podStartSLOduration=129.000462734 podStartE2EDuration="2m9.000462734s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:57.996597776 +0000 UTC m=+156.742420425" watchObservedRunningTime="2026-01-30 21:16:58.000462734 +0000 UTC m=+156.746285393"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.039169 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" podStartSLOduration=130.039152419 podStartE2EDuration="2m10.039152419s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.038697397 +0000 UTC m=+156.784520046" watchObservedRunningTime="2026-01-30 21:16:58.039152419 +0000 UTC m=+156.784975068"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.039306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.039552 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.539538649 +0000 UTC m=+157.285361288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.072376 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qthvh" podStartSLOduration=129.072360864 podStartE2EDuration="2m9.072360864s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.069746147 +0000 UTC m=+156.815568796" watchObservedRunningTime="2026-01-30 21:16:58.072360864 +0000 UTC m=+156.818183513"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.104470 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vkfk8" podStartSLOduration=129.10445363 podStartE2EDuration="2m9.10445363s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.103598009 +0000 UTC m=+156.849420658" watchObservedRunningTime="2026-01-30 21:16:58.10445363 +0000 UTC m=+156.850276279"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.140183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.140517 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.640503157 +0000 UTC m=+157.386325806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.247791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.248209 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.748188738 +0000 UTC m=+157.494011387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.248248 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.248604 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-plkp9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.263815 4751 patch_prober.go:28] interesting pod/apiserver-76f77b778f-plkp9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.263881 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" podUID="6d872f03-d4d0-49bc-9758-05060035dafa" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.282158 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n94s8" podStartSLOduration=130.282142611 podStartE2EDuration="2m10.282142611s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.245451448 +0000 UTC m=+156.991274097" watchObservedRunningTime="2026-01-30 21:16:58.282142611 +0000 UTC m=+157.027965260"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.320972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" podStartSLOduration=129.320955599 podStartE2EDuration="2m9.320955599s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.283483505 +0000 UTC m=+157.029306154" watchObservedRunningTime="2026-01-30 21:16:58.320955599 +0000 UTC m=+157.066778248"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.321136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hrfwj" podStartSLOduration=7.321133644 podStartE2EDuration="7.321133644s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.31944045 +0000 UTC m=+157.065263099" watchObservedRunningTime="2026-01-30 21:16:58.321133644 +0000 UTC m=+157.066956293"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.348737 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.349235 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.849223638 +0000 UTC m=+157.595046287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.392206 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xz8vz" podStartSLOduration=130.392187441 podStartE2EDuration="2m10.392187441s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.347289789 +0000 UTC m=+157.093112438" watchObservedRunningTime="2026-01-30 21:16:58.392187441 +0000 UTC m=+157.138010090"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.393148 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l4lnd" podStartSLOduration=129.393143105 podStartE2EDuration="2m9.393143105s" podCreationTimestamp="2026-01-30 21:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.389956614 +0000 UTC m=+157.135779263" watchObservedRunningTime="2026-01-30 21:16:58.393143105 +0000 UTC m=+157.138965754"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.441705 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-njw5m" podStartSLOduration=130.441686621 podStartE2EDuration="2m10.441686621s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:58.439763382 +0000 UTC m=+157.185586031" watchObservedRunningTime="2026-01-30 21:16:58.441686621 +0000 UTC m=+157.187509270"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.449894 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.450052 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.950028953 +0000 UTC m=+157.695851592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.450404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.450699 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:58.95069172 +0000 UTC m=+157.696514369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.552015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.552133 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.05210396 +0000 UTC m=+157.797926609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.552562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.552935 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.05292383 +0000 UTC m=+157.798746479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.653692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.653834 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.153810387 +0000 UTC m=+157.899633036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.654022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.654342 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.154319081 +0000 UTC m=+157.900141730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.689407 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.689463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.700097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.755079 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.755409 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.255382802 +0000 UTC m=+158.001205451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.826284 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"f7de1529cb1b6907cd56542193d3fe1dcd309fb7be492dd353b1a2bdba56394c"}
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.829478 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 21:16:58 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld
Jan 30 21:16:58 crc kubenswrapper[4751]: [+]process-running ok
Jan 30 21:16:58 crc kubenswrapper[4751]: healthz check failed
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.829689 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.836283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8wjm9"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.846770 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv"
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.860147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.860725 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.360712562 +0000 UTC m=+158.106535211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.961830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.962014 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.461985628 +0000 UTC m=+158.207808277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:58 crc kubenswrapper[4751]: I0130 21:16:58.962254 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:58 crc kubenswrapper[4751]: E0130 21:16:58.964791 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.464776629 +0000 UTC m=+158.210599278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.063866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.064229 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.56421504 +0000 UTC m=+158.310037689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.157934 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-54ffx"]
Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.158777 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx"
Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.165347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.165659 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.6656425 +0000 UTC m=+158.411465149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.171168 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.183922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266342 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.266516 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.766490556 +0000 UTC m=+158.512313195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266548 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.266700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.266952 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.766938977 +0000 UTC m=+158.512761626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.338793 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.339775 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.345100 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.367805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.368147 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 21:16:59.868128621 +0000 UTC m=+158.613951270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.369052 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.369260 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.370098 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.431501 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"certified-operators-54ffx\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469354 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.469459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " 
pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.469765 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:16:59.969749068 +0000 UTC m=+158.715571717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.471114 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.526359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.527462 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.548283 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: 
\"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.573935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.574064 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.074045631 +0000 UTC m=+158.819868280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.574660 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.574874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.600099 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"community-operators-wvvq8\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.657859 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675052 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.675381 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.175369499 +0000 UTC m=+158.921192148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.675514 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.676130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.714405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"certified-operators-b6k7d\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.729074 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nldk6" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.737993 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.738909 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.765006 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.779177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.779514 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.279499198 +0000 UTC m=+159.025321847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.832474 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:16:59 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:16:59 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:16:59 crc kubenswrapper[4751]: healthz check failed Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.832515 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.846173 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.857825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"5bd0a32fd9ca1552f10c0ac96d767afcae7130b0de77b2177771634405e2dc75"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.857865 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" event={"ID":"ac8a7752-ba4b-41eb-a085-b493f6876beb","Type":"ContainerStarted","Data":"44a2bed8fef5945a849baa1e134366bbd773e20e0c67a2d3605650f83b772f73"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.877679 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerID="e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51" exitCode=0 Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.878318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerDied","Data":"e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51"} Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc 
kubenswrapper[4751]: I0130 21:16:59.880731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.880778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.881022 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.381011251 +0000 UTC m=+159.126833890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.914741 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tw9q7" podStartSLOduration=8.914723868 podStartE2EDuration="8.914723868s" podCreationTimestamp="2026-01-30 21:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:16:59.902249011 +0000 UTC m=+158.648071660" watchObservedRunningTime="2026-01-30 21:16:59.914723868 +0000 UTC m=+158.660546507" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.958919 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985406 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985491 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " 
pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.985538 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.48551251 +0000 UTC m=+159.231335159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.985797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.987152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.987357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:16:59 crc kubenswrapper[4751]: E0130 21:16:59.989201 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.489184043 +0000 UTC m=+159.235006812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9lsr5" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:16:59 crc kubenswrapper[4751]: I0130 21:16:59.991451 4751 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 21:16:59 crc kubenswrapper[4751]: W0130 21:16:59.992838 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda59ef52d_2f47_42ac_a233_0285be317cc9.slice/crio-3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797 WatchSource:0}: Error finding container 3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797: Status 404 returned error can't find the container with id 3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.009517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"community-operators-tgvqk\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.060294 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.088426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:17:00 crc kubenswrapper[4751]: E0130 21:17:00.088716 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 21:17:00.588702216 +0000 UTC m=+159.334524865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.118365 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.163685 4751 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T21:16:59.991487592Z","Handler":null,"Name":""} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.179551 4751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.179605 4751 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.190197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.195974 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.196013 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.257918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9lsr5\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") " pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.291336 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.310152 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.342053 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.352051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:00 crc kubenswrapper[4751]: W0130 21:17:00.360061 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41823bd1_3ae0_4f41_847e_d0b35047047c.slice/crio-689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c WatchSource:0}: Error finding container 689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c: Status 404 returned error can't find the container with id 689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.398780 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.593595 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:17:00 crc kubenswrapper[4751]: W0130 21:17:00.602499 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d0a80a_e569_428a_b251_33f28e06fffd.slice/crio-af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20 WatchSource:0}: Error finding container af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20: Status 404 returned error can't find the container with id af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.829434 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:00 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:00 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:00 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.829503 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888391 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.888505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" 
event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890156 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.890243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerStarted","Data":"25d69c268722a1234878b44da4db4eac47a853d184bfae913c7a2d4ea1ad28d3"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.892076 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893199 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.893444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900630 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" exitCode=0 Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900685 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.900705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerStarted","Data":"689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.905010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerStarted","Data":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.905059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" 
event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerStarted","Data":"af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20"} Jan 30 21:17:00 crc kubenswrapper[4751]: I0130 21:17:00.993083 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podStartSLOduration=132.993064975 podStartE2EDuration="2m12.993064975s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:00.990777347 +0000 UTC m=+159.736599996" watchObservedRunningTime="2026-01-30 21:17:00.993064975 +0000 UTC m=+159.738887634" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.166813 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.167424 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.171093 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.171264 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.174923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.184825 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.205499 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.205558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.306924 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307003 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307057 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") pod \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\" (UID: \"cc9ed63a-23a2-4b50-a290-0409ff14fd95\") " Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307418 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.307909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317400 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: E0130 21:17:01.317606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317622 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.317738 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" containerName="collect-profiles" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.318456 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.325166 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.325196 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx" (OuterVolumeSpecName: "kube-api-access-47wdx") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). InnerVolumeSpecName "kube-api-access-47wdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.329357 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.330415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc9ed63a-23a2-4b50-a290-0409ff14fd95" (UID: "cc9ed63a-23a2-4b50-a290-0409ff14fd95"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.336896 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409188 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47wdx\" (UniqueName: \"kubernetes.io/projected/cc9ed63a-23a2-4b50-a290-0409ff14fd95-kube-api-access-47wdx\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409199 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9ed63a-23a2-4b50-a290-0409ff14fd95-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.409209 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9ed63a-23a2-4b50-a290-0409ff14fd95-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.493667 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.510744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.510859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.512970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.540505 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"redhat-marketplace-6v829\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.667279 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.721442 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.722700 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.733692 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818295 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818353 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.818441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.830976 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:01 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:01 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:01 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.831018 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.913989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" event={"ID":"cc9ed63a-23a2-4b50-a290-0409ff14fd95","Type":"ContainerDied","Data":"b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4"} Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914036 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ac56afbe946ed8a3114588a856c9022e503d0feb1988aaa10f041f9dcbf7e4" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914200 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.914215 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920095 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.920225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.921548 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.921571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.924837 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.940957 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"redhat-marketplace-8lbjc\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.984038 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 21:17:01 crc kubenswrapper[4751]: I0130 21:17:01.988287 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 21:17:02 crc kubenswrapper[4751]: W0130 21:17:02.002617 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5428d8a_a5ca_4889_a8b5_6fc7edf2d121.slice/crio-f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8 WatchSource:0}: Error finding 
container f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8: Status 404 returned error can't find the container with id f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8 Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.046721 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.321188 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.330635 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.331353 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.336145 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427220 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.427240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.477719 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528186 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.528346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod 
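Every record above follows the klog header layout after the journald prefix: a severity letter (I/W/E/F), MMDD date, wall-clock time, PID, and source file:line, then the message. A minimal sketch that pulls those fields out of a line, assuming exactly the layout seen here:

package main

import (
    "fmt"
    "regexp"
)

// klogHeader matches the header klog emits after the journald prefix:
// severity letter, MMDD, time, PID, file:line, then the message.
var klogHeader = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w./-]+:\d+)\] (.*)`)

func main() {
    line := `Jan 30 21:17:02 crc kubenswrapper[4751]: W0130 21:17:02.002617 4751 manager.go:1169] Failed to process watch event`
    m := klogHeader.FindStringSubmatch(line)
    if m == nil {
        fmt.Println("no klog header found")
        return
    }
    fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmessage=%s\n",
        m[1], m[2], m[3], m[4], m[5], m[6])
}
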
\"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.529043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.529349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.547622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"redhat-operators-zct7w\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.670761 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.715278 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.716541 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.721703 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832112 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:02 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:02 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:02 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.832509 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.934877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.934972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935541 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935642 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e" exitCode=0 Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.935726 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerStarted","Data":"960e022b4f8bb566d2fdbe8e623c147ebba25b0f4a883e6013345ce05433bda9"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.938744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.953763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerStarted","Data":"e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.953815 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerStarted","Data":"f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.968490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"redhat-operators-fvkc4\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.970783 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248" exitCode=0 Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.971919 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.971962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerStarted","Data":"24f317d1701097d9103031354b6663adbe17eff186ff15234f4ba88c7fab3126"} Jan 30 21:17:02 crc kubenswrapper[4751]: I0130 21:17:02.987025 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.9870065270000001 podStartE2EDuration="1.987006527s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:02.982658477 +0000 UTC m=+161.728481126" watchObservedRunningTime="2026-01-30 21:17:02.987006527 +0000 UTC m=+161.732829166" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.060205 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.200355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:17:03 crc kubenswrapper[4751]: W0130 21:17:03.209264 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05ec0ea_cf7e_46ce_9814_a4597ebcf238.slice/crio-804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4 WatchSource:0}: Error finding container 804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4: Status 404 returned error can't find the container with id 804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.259757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.266728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-plkp9" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.289712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.408723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.409014 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.413187 4751 patch_prober.go:28] interesting pod/console-f9d7485db-7bw65 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.416466 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7bw65" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417723 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417756 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-8l2v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417785 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.417781 4751 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-8l2v5" podUID="21dc9dc0-702d-49a7-baed-f8e70f6867f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.826800 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.829596 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:03 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:03 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:03 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.829647 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.980776 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642" exitCode=0 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.983008 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30" exitCode=0 Jan 30 21:17:03 crc kubenswrapper[4751]: I0130 21:17:03.989374 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerID="e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044" exitCode=0 Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011210 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011248 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"dc6fc5c63903f1bd0c4e0a90425019daa79c25f9ce21c6dcff83a787794afb40"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.011257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerDied","Data":"e519758b7271807d6d89766f5400397363b935aa0b64ac3537487245ff75d044"} Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.828254 4751 patch_prober.go:28] interesting pod/router-default-5444994796-w42cs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 21:17:04 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 30 21:17:04 crc kubenswrapper[4751]: [+]process-running ok Jan 30 21:17:04 crc kubenswrapper[4751]: healthz check failed Jan 30 21:17:04 crc kubenswrapper[4751]: I0130 21:17:04.828310 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-w42cs" podUID="dfb29a82-8be0-4219-81b1-fecfcb4e1061" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.283016 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400601 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") pod \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400700 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") pod \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\" (UID: \"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121\") " Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" (UID: "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.400955 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.407725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" (UID: "f5428d8a-a5ca-4889-a8b5-6fc7edf2d121"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.502926 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5428d8a-a5ca-4889-a8b5-6fc7edf2d121-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.828919 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.830828 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-w42cs" Jan 30 21:17:05 crc kubenswrapper[4751]: I0130 21:17:05.929418 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.005515 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.013595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f5428d8a-a5ca-4889-a8b5-6fc7edf2d121","Type":"ContainerDied","Data":"f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8"} Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.013629 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9337cf67ac2e9ca2b6ef7626a4fd708da54aacdace5c42c9e59aa8ae2b8a9f8" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.297197 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hrfwj" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.647595 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:06 crc kubenswrapper[4751]: E0130 21:17:06.648023 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648034 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648120 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5428d8a-a5ca-4889-a8b5-6fc7edf2d121" containerName="pruner" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.648435 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.650298 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.650368 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.657536 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.732212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.732260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833552 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.833559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.859221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:06 crc kubenswrapper[4751]: I0130 21:17:06.968787 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.099438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.105563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c30a687-0b58-4a63-b9e3-3a3624676358-metrics-certs\") pod \"network-metrics-daemon-c477w\" (UID: \"3c30a687-0b58-4a63-b9e3-3a3624676358\") " pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:11 crc kubenswrapper[4751]: I0130 21:17:11.199156 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c477w" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.413639 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.421471 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:17:13 crc kubenswrapper[4751]: I0130 21:17:13.430754 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8l2v5" Jan 30 21:17:20 crc kubenswrapper[4751]: I0130 21:17:20.131269 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 21:17:20 crc kubenswrapper[4751]: I0130 21:17:20.407903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" Jan 30 21:17:24 crc kubenswrapper[4751]: I0130 21:17:24.126711 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:17:24 crc kubenswrapper[4751]: I0130 21:17:24.127247 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.190088 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.190685 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p25b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tgvqk_openshift-marketplace(5607f892-9717-439f-a920-102a2bd3d960): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.191778 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.234290 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.234635 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8lbjc_openshift-marketplace(2aa7a824-734e-401d-b0af-ead8bb03dad5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:32 crc kubenswrapper[4751]: E0130 21:17:32.236176 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.756073 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.756118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.846262 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.847917 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dh7pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wvvq8_openshift-marketplace(5de678c2-f43a-44fa-ab58-259f765c3e31): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.849494 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.876033 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.876170 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-54ffx_openshift-marketplace(a59ef52d-2f47-42ac-a233-0285be317cc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.877372 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.898208 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.898710 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sm6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6v829_openshift-marketplace(94e03be5-809d-49ba-9318-6222131628f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:33 crc kubenswrapper[4751]: E0130 21:17:33.900084 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" Jan 30 21:17:34 crc kubenswrapper[4751]: I0130 21:17:34.252377 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hmjv6" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.331634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.331678 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.333424 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.371586 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.371751 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ksmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fvkc4_openshift-marketplace(80287af8-6129-4973-8442-887fa4b3ee9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.373117 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.403514 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.403661 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zct7w_openshift-marketplace(b05ec0ea-cf7e-46ce-9814-a4597ebcf238): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 21:17:37 crc kubenswrapper[4751]: E0130 21:17:37.405369 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" Jan 30 21:17:37 crc kubenswrapper[4751]: I0130 21:17:37.524394 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c477w"] Jan 30 21:17:37 crc kubenswrapper[4751]: W0130 21:17:37.530520 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c30a687_0b58_4a63_b9e3_3a3624676358.slice/crio-d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f WatchSource:0}: Error finding container d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f: Status 404 returned error can't find the container with id d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f Jan 30 21:17:37 crc kubenswrapper[4751]: I0130 21:17:37.584145 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.205396 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" exitCode=0 Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.205836 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"619e3e4731fc1aee78a7e7b7e9b131442f16bff59999c94297828e6cc2a19c4e"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"036c71febd360410e089e5d20d16b6a20c09c8db293dc1e93730cac18b201cfc"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.209800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c477w" event={"ID":"3c30a687-0b58-4a63-b9e3-3a3624676358","Type":"ContainerStarted","Data":"d3e621d75ce79ffaee93ba12f9e803df2fcc545f6ada1dc30ac5fdc0ee406f5f"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.212789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerStarted","Data":"d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4"} Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.212814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerStarted","Data":"7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c"} Jan 30 21:17:38 crc kubenswrapper[4751]: E0130 21:17:38.217519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" Jan 30 21:17:38 crc kubenswrapper[4751]: E0130 21:17:38.217687 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.282837 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=32.282820549 podStartE2EDuration="32.282820549s" podCreationTimestamp="2026-01-30 21:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:38.277250208 +0000 UTC m=+197.023072857" watchObservedRunningTime="2026-01-30 21:17:38.282820549 +0000 UTC m=+197.028643198" Jan 30 21:17:38 crc kubenswrapper[4751]: I0130 21:17:38.329407 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c477w" podStartSLOduration=170.329388934 podStartE2EDuration="2m50.329388934s" podCreationTimestamp="2026-01-30 21:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:38.327533667 +0000 UTC m=+197.073356336" watchObservedRunningTime="2026-01-30 21:17:38.329388934 +0000 UTC m=+197.075211603" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.218991 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerID="d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4" exitCode=0 Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.219197 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerDied","Data":"d87927a76626e017e149cd3548630cdcac05f9e0d61f134b1062a5375f5c4ae4"} Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.223213 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerStarted","Data":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.262827 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6k7d" podStartSLOduration=2.383626132 podStartE2EDuration="40.262811912s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.904552343 +0000 UTC m=+159.650375002" lastFinishedPulling="2026-01-30 21:17:38.783738103 +0000 UTC m=+197.529560782" observedRunningTime="2026-01-30 21:17:39.261289464 +0000 UTC m=+198.007112113" watchObservedRunningTime="2026-01-30 21:17:39.262811912 +0000 UTC m=+198.008634551" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.846725 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:39 crc kubenswrapper[4751]: I0130 21:17:39.846793 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.445243 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549065 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") pod \"062f5c13-ba50-4901-b4a1-92a8dce64389\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") pod \"062f5c13-ba50-4901-b4a1-92a8dce64389\" (UID: \"062f5c13-ba50-4901-b4a1-92a8dce64389\") " Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "062f5c13-ba50-4901-b4a1-92a8dce64389" (UID: "062f5c13-ba50-4901-b4a1-92a8dce64389"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.549436 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/062f5c13-ba50-4901-b4a1-92a8dce64389-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.555208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "062f5c13-ba50-4901-b4a1-92a8dce64389" (UID: "062f5c13-ba50-4901-b4a1-92a8dce64389"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.650350 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/062f5c13-ba50-4901-b4a1-92a8dce64389-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:40 crc kubenswrapper[4751]: I0130 21:17:40.983522 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-b6k7d" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:40 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:40 crc kubenswrapper[4751]: > Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.233572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"062f5c13-ba50-4901-b4a1-92a8dce64389","Type":"ContainerDied","Data":"7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c"} Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.234120 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2659e63802921d7c9a8cefe03242547d2ee43437129c21da56e86b4dd9ee5c" Jan 30 21:17:41 crc kubenswrapper[4751]: I0130 21:17:41.233593 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.849078 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:44 crc kubenswrapper[4751]: E0130 21:17:44.849993 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.850020 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.850282 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="062f5c13-ba50-4901-b4a1-92a8dce64389" containerName="pruner" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.851146 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.854690 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.854812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:44 crc kubenswrapper[4751]: I0130 21:17:44.858489 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.008540 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.008796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.109708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.109799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.110231 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.151569 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.171146 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:45 crc kubenswrapper[4751]: I0130 21:17:45.607453 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.264867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerStarted","Data":"6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484"} Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.265133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerStarted","Data":"a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274"} Jan 30 21:17:46 crc kubenswrapper[4751]: I0130 21:17:46.278948 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.278929142 podStartE2EDuration="2.278929142s" podCreationTimestamp="2026-01-30 21:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:46.275317844 +0000 UTC m=+205.021140533" watchObservedRunningTime="2026-01-30 21:17:46.278929142 +0000 UTC m=+205.024751791" Jan 30 21:17:47 crc kubenswrapper[4751]: I0130 21:17:47.270125 4751 generic.go:334] "Generic (PLEG): container finished" podID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerID="6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484" exitCode=0 Jan 30 21:17:47 crc kubenswrapper[4751]: I0130 21:17:47.270163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerDied","Data":"6e554bd68c7601b22fb2c7b3a7e062ba07539d9cb0c6177c7b4edf94cb637484"} Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.517791 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656834 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") pod \"2910afc2-0fe9-492b-8dcf-ddab577f7685\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") pod \"2910afc2-0fe9-492b-8dcf-ddab577f7685\" (UID: \"2910afc2-0fe9-492b-8dcf-ddab577f7685\") " Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.656948 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2910afc2-0fe9-492b-8dcf-ddab577f7685" (UID: "2910afc2-0fe9-492b-8dcf-ddab577f7685"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.657149 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2910afc2-0fe9-492b-8dcf-ddab577f7685-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.665360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2910afc2-0fe9-492b-8dcf-ddab577f7685" (UID: "2910afc2-0fe9-492b-8dcf-ddab577f7685"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:48 crc kubenswrapper[4751]: I0130 21:17:48.758710 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2910afc2-0fe9-492b-8dcf-ddab577f7685-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.285348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc"} Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2910afc2-0fe9-492b-8dcf-ddab577f7685","Type":"ContainerDied","Data":"a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274"} Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286663 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47739a235fe95d1cbc19228a27614e1f260c6154881c9efaae1076098f63274" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.286712 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.912934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:49 crc kubenswrapper[4751]: I0130 21:17:49.964304 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.297751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.305365 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.305568 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.312420 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.312482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc"} Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.314506 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43" exitCode=0 Jan 30 21:17:50 crc kubenswrapper[4751]: I0130 21:17:50.314572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.009145 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.320682 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b" exitCode=0 Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.320968 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.323102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" 
event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerStarted","Data":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.325501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerStarted","Data":"10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.327319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.329296 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerStarted","Data":"242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9"} Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.329490 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6k7d" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" containerID="cri-o://c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" gracePeriod=2 Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.364078 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tgvqk" podStartSLOduration=2.530692689 podStartE2EDuration="52.364059392s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.895937854 +0000 UTC m=+159.641760503" lastFinishedPulling="2026-01-30 21:17:50.729304567 +0000 UTC m=+209.475127206" observedRunningTime="2026-01-30 21:17:51.361017201 +0000 UTC m=+210.106839840" watchObservedRunningTime="2026-01-30 21:17:51.364059392 +0000 UTC m=+210.109882041" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.398635 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6v829" podStartSLOduration=2.5212305329999998 podStartE2EDuration="50.398618589s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="2026-01-30 21:17:02.937569419 +0000 UTC m=+161.683392068" lastFinishedPulling="2026-01-30 21:17:50.814957475 +0000 UTC m=+209.560780124" observedRunningTime="2026-01-30 21:17:51.382211717 +0000 UTC m=+210.128034386" watchObservedRunningTime="2026-01-30 21:17:51.398618589 +0000 UTC m=+210.144441228" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.418092 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8lbjc" podStartSLOduration=2.635512707 podStartE2EDuration="50.418074602s" podCreationTimestamp="2026-01-30 21:17:01 +0000 UTC" firstStartedPulling="2026-01-30 21:17:02.973309978 +0000 UTC m=+161.719132627" lastFinishedPulling="2026-01-30 21:17:50.755871843 +0000 UTC m=+209.501694522" observedRunningTime="2026-01-30 21:17:51.414245087 +0000 UTC m=+210.160067756" watchObservedRunningTime="2026-01-30 21:17:51.418074602 +0000 UTC m=+210.163897271" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.669087 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.669474 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.678181 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.798552 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") pod \"41823bd1-3ae0-4f41-847e-d0b35047047c\" (UID: \"41823bd1-3ae0-4f41-847e-d0b35047047c\") " Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.799415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities" (OuterVolumeSpecName: "utilities") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.804402 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8" (OuterVolumeSpecName: "kube-api-access-nj5d8") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "kube-api-access-nj5d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842007 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842192 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842203 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842218 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842224 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842235 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-utilities" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842241 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-utilities" Jan 30 21:17:51 crc kubenswrapper[4751]: E0130 21:17:51.842251 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-content" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="extract-content" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842376 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerName="registry-server" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842405 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2910afc2-0fe9-492b-8dcf-ddab577f7685" containerName="pruner" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.842730 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.844765 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.844765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.849359 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.850655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41823bd1-3ae0-4f41-847e-d0b35047047c" (UID: "41823bd1-3ae0-4f41-847e-d0b35047047c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899685 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5d8\" (UniqueName: \"kubernetes.io/projected/41823bd1-3ae0-4f41-847e-d0b35047047c-kube-api-access-nj5d8\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899723 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:51 crc kubenswrapper[4751]: I0130 21:17:51.899757 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41823bd1-3ae0-4f41-847e-d0b35047047c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.000997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.001384 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.001439 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.047525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.047577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.101996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 
crc kubenswrapper[4751]: I0130 21:17:52.102180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.102215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.135520 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"installer-9-crc\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.155042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.337689 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerStarted","Data":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.343114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355774 4751 generic.go:334] "Generic (PLEG): container finished" podID="41823bd1-3ae0-4f41-847e-d0b35047047c" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" exitCode=0 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355858 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6k7d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355895 4751 scope.go:117] "RemoveContainer" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.355885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6k7d" event={"ID":"41823bd1-3ae0-4f41-847e-d0b35047047c","Type":"ContainerDied","Data":"689c5d4239fa30eae8db15cc294718aa0d2dc9d4b894015fbeb7c4691e93d36c"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.363683 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1" exitCode=0 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.364516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.388089 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-54ffx" podStartSLOduration=2.54675195 podStartE2EDuration="53.388072092s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.891891001 +0000 UTC m=+159.637713680" lastFinishedPulling="2026-01-30 21:17:51.733211173 +0000 UTC m=+210.479033822" observedRunningTime="2026-01-30 21:17:52.384773443 +0000 UTC m=+211.130596092" watchObservedRunningTime="2026-01-30 21:17:52.388072092 +0000 UTC m=+211.133894731" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.391176 4751 scope.go:117] "RemoveContainer" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.407006 4751 scope.go:117] "RemoveContainer" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.420843 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.424948 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6k7d"] Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.428918 4751 scope.go:117] "RemoveContainer" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.429375 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": container with ID starting with c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc not found: ID does not exist" containerID="c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.429403 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc"} err="failed to get container status \"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": rpc error: code = NotFound desc = could not find container 
\"c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc\": container with ID starting with c8f63962edb37c7a7dfff9c1266ee610ce21ae1ce8663a8c5a1c0d445db5b1bc not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.429435 4751 scope.go:117] "RemoveContainer" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.430661 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": container with ID starting with 49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb not found: ID does not exist" containerID="49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.430707 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb"} err="failed to get container status \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": rpc error: code = NotFound desc = could not find container \"49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb\": container with ID starting with 49530e2d96dab78725c3bbb47b94300babc691cfe0e97af29fdd989cb06346eb not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.430736 4751 scope.go:117] "RemoveContainer" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: E0130 21:17:52.431452 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": container with ID starting with b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d not found: ID does not exist" containerID="b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.431479 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d"} err="failed to get container status \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": rpc error: code = NotFound desc = could not find container \"b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d\": container with ID starting with b82fbc34d222439d58e4a6b54e9608b232b0b5973669b77f75ff51c83aa9b67d not found: ID does not exist" Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.594513 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 21:17:52 crc kubenswrapper[4751]: W0130 21:17:52.597953 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod937199db_2864_42e7_bd7b_65315d94920f.slice/crio-f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4 WatchSource:0}: Error finding container f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4: Status 404 returned error can't find the container with id f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4 Jan 30 21:17:52 crc kubenswrapper[4751]: I0130 21:17:52.723761 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6v829" 
podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:52 crc kubenswrapper[4751]: > Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.091677 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" probeResult="failure" output=< Jan 30 21:17:53 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:17:53 crc kubenswrapper[4751]: > Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.371153 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f" exitCode=0 Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.371211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.374316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerStarted","Data":"e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.374363 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerStarted","Data":"f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.376109 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89" exitCode=0 Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.376156 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.380043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerStarted","Data":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"} Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.406696 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zct7w" podStartSLOduration=2.563045779 podStartE2EDuration="51.406674199s" podCreationTimestamp="2026-01-30 21:17:02 +0000 UTC" firstStartedPulling="2026-01-30 21:17:03.982107487 +0000 UTC m=+162.727930126" lastFinishedPulling="2026-01-30 21:17:52.825735897 +0000 UTC m=+211.571558546" observedRunningTime="2026-01-30 21:17:53.404417012 +0000 UTC m=+212.150239681" watchObservedRunningTime="2026-01-30 21:17:53.406674199 +0000 UTC m=+212.152496868" Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.419182 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.419158494 podStartE2EDuration="2.419158494s" podCreationTimestamp="2026-01-30 21:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:17:53.416838724 +0000 UTC m=+212.162661383" watchObservedRunningTime="2026-01-30 21:17:53.419158494 +0000 UTC m=+212.164981183" Jan 30 21:17:53 crc kubenswrapper[4751]: I0130 21:17:53.985090 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41823bd1-3ae0-4f41-847e-d0b35047047c" path="/var/lib/kubelet/pods/41823bd1-3ae0-4f41-847e-d0b35047047c/volumes" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127027 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127387 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127427 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127821 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.127874 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125" gracePeriod=600 Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.388642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerStarted","Data":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.391743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerStarted","Data":"3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.398534 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125" exitCode=0 Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.399115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125"} Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.415532 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvvq8" podStartSLOduration=2.311230155 podStartE2EDuration="55.415445131s" podCreationTimestamp="2026-01-30 21:16:59 +0000 UTC" firstStartedPulling="2026-01-30 21:17:00.891841919 +0000 UTC m=+159.637664558" lastFinishedPulling="2026-01-30 21:17:53.996056875 +0000 UTC m=+212.741879534" observedRunningTime="2026-01-30 21:17:54.411784412 +0000 UTC m=+213.157607061" watchObservedRunningTime="2026-01-30 21:17:54.415445131 +0000 UTC m=+213.161267780" Jan 30 21:17:54 crc kubenswrapper[4751]: I0130 21:17:54.428719 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvkc4" podStartSLOduration=2.653403521 podStartE2EDuration="52.428707389s" podCreationTimestamp="2026-01-30 21:17:02 +0000 UTC" firstStartedPulling="2026-01-30 21:17:03.983868342 +0000 UTC m=+162.729690991" lastFinishedPulling="2026-01-30 21:17:53.7591722 +0000 UTC m=+212.504994859" observedRunningTime="2026-01-30 21:17:54.427114282 +0000 UTC m=+213.172936931" watchObservedRunningTime="2026-01-30 21:17:54.428707389 +0000 UTC m=+213.174530028" Jan 30 21:17:55 crc kubenswrapper[4751]: I0130 21:17:55.406448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"} Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.471956 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.472684 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.517671 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.657992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.658064 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:17:59 crc kubenswrapper[4751]: I0130 21:17:59.693042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.119590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.120478 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.169533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.501292 4751 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.507863 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:18:00 crc kubenswrapper[4751]: I0130 21:18:00.509546 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:01 crc kubenswrapper[4751]: I0130 21:18:01.746911 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:18:01 crc kubenswrapper[4751]: I0130 21:18:01.819986 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.008871 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.133565 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.197661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.674008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.674466 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.762539 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:02 crc kubenswrapper[4751]: I0130 21:18:02.767546 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.060740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.061061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.101968 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.465001 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tgvqk" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server" containerID="cri-o://10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923" gracePeriod=2 Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.518917 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:18:03 crc kubenswrapper[4751]: I0130 21:18:03.533288 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.429608 4751 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.430788 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8lbjc" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server" containerID="cri-o://242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" gracePeriod=2 Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.474106 4751 generic.go:334] "Generic (PLEG): container finished" podID="5607f892-9717-439f-a920-102a2bd3d960" containerID="10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923" exitCode=0 Jan 30 21:18:04 crc kubenswrapper[4751]: I0130 21:18:04.474196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.484805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgvqk" event={"ID":"5607f892-9717-439f-a920-102a2bd3d960","Type":"ContainerDied","Data":"d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.485292 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d742ab7f8f1e8c741124ab96be31bce53b84eb48f204dc5b3fc704a32bc25d11" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.488211 4751 generic.go:334] "Generic (PLEG): container finished" podID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerID="242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" exitCode=0 Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.488671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9"} Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.516356 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.573817 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.574050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.574094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") pod \"5607f892-9717-439f-a920-102a2bd3d960\" (UID: \"5607f892-9717-439f-a920-102a2bd3d960\") " Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.576095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities" (OuterVolumeSpecName: "utilities") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.585704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9" (OuterVolumeSpecName: "kube-api-access-p25b9") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "kube-api-access-p25b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.647876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5607f892-9717-439f-a920-102a2bd3d960" (UID: "5607f892-9717-439f-a920-102a2bd3d960"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676847 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676876 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5607f892-9717-439f-a920-102a2bd3d960-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:05 crc kubenswrapper[4751]: I0130 21:18:05.676886 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25b9\" (UniqueName: \"kubernetes.io/projected/5607f892-9717-439f-a920-102a2bd3d960-kube-api-access-p25b9\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.498281 4751 util.go:48] "No ready sandbox for pod can be found. 
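Each torn-down pod's volumes walk the same three transitions: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached". A hypothetical log-scanning helper, not part of kubelet, that follows a volume's UniqueName through those transitions (all names and regexes invented for this sketch; it only pattern-matches the raw log text on stdin):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Follows each volume's UniqueName through the unmount sequence seen above.
// "started" and "detached" lines quote the UniqueName as \"...\"; the
// TearDown line quotes it with plain double quotes.
var transitions = []struct {
	step string
	re   *regexp.Regexp
}{
	{"started", regexp.MustCompile(`UnmountVolume started for volume .*\(UniqueName: \\"([^\\]+)\\"`)},
	{"torndown", regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)},
	{"detached", regexp.MustCompile(`Volume detached for volume .*\(UniqueName: \\"([^\\]+)\\"`)},
}

func main() {
	last := map[string]string{} // UniqueName -> last observed step
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		for _, t := range transitions {
			if m := t.re.FindStringSubmatch(sc.Text()); m != nil {
				last[m[1]] = t.step
			}
		}
	}
	for name, step := range last {
		fmt.Printf("%s: last=%s\n", name, step)
	}
}

Fed this log, every volume of pod 5607f892-9717-439f-a920-102a2bd3d960 should end in state "detached", which is the reconciler's signal that the pod directory can later be garbage-collected.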
Need to start a new one" pod="openshift-marketplace/community-operators-tgvqk" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.528434 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.534395 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tgvqk"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.681992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.706519 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.706814 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.708753 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities" (OuterVolumeSpecName: "utilities") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.763109 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.808123 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") pod \"2aa7a824-734e-401d-b0af-ead8bb03dad5\" (UID: \"2aa7a824-734e-401d-b0af-ead8bb03dad5\") " Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809131 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"] Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809583 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.809612 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa7a824-734e-401d-b0af-ead8bb03dad5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.810613 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvkc4" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server" containerID="cri-o://3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2" gracePeriod=2 Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.814857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc" (OuterVolumeSpecName: "kube-api-access-jskpc") pod "2aa7a824-734e-401d-b0af-ead8bb03dad5" (UID: "2aa7a824-734e-401d-b0af-ead8bb03dad5"). InnerVolumeSpecName "kube-api-access-jskpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:06 crc kubenswrapper[4751]: I0130 21:18:06.911242 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jskpc\" (UniqueName: \"kubernetes.io/projected/2aa7a824-734e-401d-b0af-ead8bb03dad5-kube-api-access-jskpc\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lbjc" event={"ID":"2aa7a824-734e-401d-b0af-ead8bb03dad5","Type":"ContainerDied","Data":"24f317d1701097d9103031354b6663adbe17eff186ff15234f4ba88c7fab3126"} Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504665 4751 scope.go:117] "RemoveContainer" containerID="242b44373e4553b6a95b1dab9ee35d628ad1d218dbe55524005712a0987bb4b9" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.504762 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lbjc" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.518165 4751 generic.go:334] "Generic (PLEG): container finished" podID="80287af8-6129-4973-8442-887fa4b3ee9f" containerID="3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2" exitCode=0 Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.518207 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2"} Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.528895 4751 scope.go:117] "RemoveContainer" containerID="b07ef308640fd17ca101597385790cdc7d8a83b7a8df7bce4290518e0c697c43" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.550730 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.553413 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lbjc"] Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.561171 4751 scope.go:117] "RemoveContainer" containerID="01546679b55fd82a5346039e7e8bf30c9a6fe860dba2c776bd0984b001c41248" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.827640 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.986352 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" path="/var/lib/kubelet/pods/2aa7a824-734e-401d-b0af-ead8bb03dad5/volumes" Jan 30 21:18:07 crc kubenswrapper[4751]: I0130 21:18:07.987153 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5607f892-9717-439f-a920-102a2bd3d960" path="/var/lib/kubelet/pods/5607f892-9717-439f-a920-102a2bd3d960/volumes" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.024956 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.025050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.025089 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") pod \"80287af8-6129-4973-8442-887fa4b3ee9f\" (UID: \"80287af8-6129-4973-8442-887fa4b3ee9f\") " Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.027274 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities" (OuterVolumeSpecName: "utilities") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.040063 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq" (OuterVolumeSpecName: "kube-api-access-5ksmq") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "kube-api-access-5ksmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.127371 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.127417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ksmq\" (UniqueName: \"kubernetes.io/projected/80287af8-6129-4973-8442-887fa4b3ee9f-kube-api-access-5ksmq\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.195364 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80287af8-6129-4973-8442-887fa4b3ee9f" (UID: "80287af8-6129-4973-8442-887fa4b3ee9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.228392 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80287af8-6129-4973-8442-887fa4b3ee9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.530989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvkc4" event={"ID":"80287af8-6129-4973-8442-887fa4b3ee9f","Type":"ContainerDied","Data":"dc6fc5c63903f1bd0c4e0a90425019daa79c25f9ce21c6dcff83a787794afb40"} Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.531048 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.531048 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvkc4"
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.531392 4751 scope.go:117] "RemoveContainer" containerID="3421b4190428564de2526db739509fd62498485491cdb7f40a973dab016062f2"
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.557022 4751 scope.go:117] "RemoveContainer" containerID="1390ec748689f89777a1f1c34363a9724760856f9473679e8a6408ff0a08227f"
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.573178 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"]
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.582838 4751 scope.go:117] "RemoveContainer" containerID="4f7f32ebba510377188fdb9f775c5bdc1a0070f2a59bec9d0e32afa0fdd36c30"
Jan 30 21:18:08 crc kubenswrapper[4751]: I0130 21:18:08.582839 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvkc4"]
Jan 30 21:18:09 crc kubenswrapper[4751]: I0130 21:18:09.986237 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" path="/var/lib/kubelet/pods/80287af8-6129-4973-8442-887fa4b3ee9f/volumes"
Jan 30 21:18:27 crc kubenswrapper[4751]: I0130 21:18:27.794975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift" containerID="cri-o://c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" gracePeriod=15
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.206713 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.277980 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"]
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278649 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278670 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278684 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278695 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278712 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278723 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278746 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278757 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278772 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278782 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278806 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278820 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278832 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278849 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278860 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-content"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278880 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278890 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.278907 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.278917 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="extract-utilities"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5607f892-9717-439f-a920-102a2bd3d960" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279096 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa7a824-734e-401d-b0af-ead8bb03dad5" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279113 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80287af8-6129-4973-8442-887fa4b3ee9f" containerName="registry-server"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279144 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a52a543-c530-48d9-a046-ac4008df0477" containerName="oauth-openshift"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.279809 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.287061 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"]
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.320852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.321154 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322401 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322555 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322776 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.322900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323045 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324378 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324973 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") pod \"8a52a543-c530-48d9-a046-ac4008df0477\" (UID: \"8a52a543-c530-48d9-a046-ac4008df0477\") "
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326057 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326408 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.326852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327003 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327474 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323097 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327607 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp"
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.323743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.324819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.325588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.327769 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.329970 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.330669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm" (OuterVolumeSpecName: "kube-api-access-qkplm") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "kube-api-access-qkplm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342175 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.342645 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.343632 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.343865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.344052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8a52a543-c530-48d9-a046-ac4008df0477" (UID: "8a52a543-c530-48d9-a046-ac4008df0477"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428592 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 
30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428914 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.428964 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429127 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429276 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkplm\" (UniqueName: \"kubernetes.io/projected/8a52a543-c530-48d9-a046-ac4008df0477-kube-api-access-qkplm\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429308 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429364 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429389 4751 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429416 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429445 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429473 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429500 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429527 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429568 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a52a543-c530-48d9-a046-ac4008df0477-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429570 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40e354d8-a733-4531-b68c-d44b182050f3-audit-dir\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429593 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429690 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.429725 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a52a543-c530-48d9-a046-ac4008df0477-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.430651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.431510 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.433826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-error\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.435678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-audit-policies\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.436254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-session\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-user-template-login\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.437646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.441067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.441864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40e354d8-a733-4531-b68c-d44b182050f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.459255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kgz\" (UniqueName: \"kubernetes.io/projected/40e354d8-a733-4531-b68c-d44b182050f3-kube-api-access-d6kgz\") pod \"oauth-openshift-5f7896898-cgrzp\" (UID: \"40e354d8-a733-4531-b68c-d44b182050f3\") " pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.610152 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648831 4751 generic.go:334] "Generic (PLEG): container finished" podID="8a52a543-c530-48d9-a046-ac4008df0477" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" exitCode=0 Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerDied","Data":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" event={"ID":"8a52a543-c530-48d9-a046-ac4008df0477","Type":"ContainerDied","Data":"85f9f12a183ee9ac32edf469f266b83c69141757b64a96e9390b64f35e4d5e44"} Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648958 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dcxn" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.648966 4751 scope.go:117] "RemoveContainer" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.681866 4751 scope.go:117] "RemoveContainer" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: E0130 21:18:28.682956 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": container with ID starting with c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e not found: ID does not exist" containerID="c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.683008 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e"} err="failed to get container status \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": rpc error: code = NotFound desc = could not find container \"c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e\": container with ID starting with c055a298ab4ac470125ea52e0402fa36c68ae7885b742532ed40b326547b365e not found: ID does not exist" Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.708740 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.714642 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dcxn"] Jan 30 21:18:28 crc kubenswrapper[4751]: I0130 21:18:28.904388 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f7896898-cgrzp"] Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" event={"ID":"40e354d8-a733-4531-b68c-d44b182050f3","Type":"ContainerStarted","Data":"fa1d9120b4afe6269fc9623b0dce2ed9a09009cdcee2400d35f01181f26e66d3"} Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657628 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.657645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" event={"ID":"40e354d8-a733-4531-b68c-d44b182050f3","Type":"ContainerStarted","Data":"1415451051e5a56955dbb44bc54385482c06f8c731615dee5b327feb9a6fecf4"} Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.688754 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" podStartSLOduration=27.688735288 podStartE2EDuration="27.688735288s" podCreationTimestamp="2026-01-30 21:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:18:29.686888633 +0000 UTC m=+248.432711282" watchObservedRunningTime="2026-01-30 21:18:29.688735288 +0000 UTC m=+248.434557947" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 
21:18:29.863661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5f7896898-cgrzp" Jan 30 21:18:29 crc kubenswrapper[4751]: I0130 21:18:29.981660 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a52a543-c530-48d9-a046-ac4008df0477" path="/var/lib/kubelet/pods/8a52a543-c530-48d9-a046-ac4008df0477/volumes" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.391429 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393017 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393255 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393659 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.393885 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394001 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394050 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.394677 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" gracePeriod=15 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.395734 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396312 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396385 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396485 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396501 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396573 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396601 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396618 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396686 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396706 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.396733 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.396749 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397079 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397113 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397131 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397153 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397178 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397203 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.397567 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.397602 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.455927 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482518 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482600 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.482763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc 
kubenswrapper[4751]: I0130 21:18:30.583363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583411 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.583657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.602275 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.602351 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.670957 4751 generic.go:334] "Generic (PLEG): container finished" podID="937199db-2864-42e7-bd7b-65315d94920f" containerID="e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26" exitCode=0 
Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.671095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerDied","Data":"e69a32abf266db71cf32cbc11401a25e95afb6e6d4db9827794b0fd5f381fb26"} Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672163 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672660 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.672967 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.674388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.675983 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676899 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676922 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676931 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" exitCode=0 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.676943 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" exitCode=2 Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.677006 4751 scope.go:117] "RemoveContainer" containerID="0501ffcb31dab5f2248cea8e71d30f16ed104417bd8f35bf0f646e300bc94d63" Jan 30 21:18:30 crc kubenswrapper[4751]: I0130 21:18:30.757053 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:18:30 crc kubenswrapper[4751]: W0130 21:18:30.782574 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad WatchSource:0}: Error finding container 742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad: Status 404 returned error can't find the container with id 742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad Jan 30 21:18:30 crc kubenswrapper[4751]: E0130 21:18:30.786458 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,LastTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.688102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b"} Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.688654 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"742a27aba0703b555c7fdffcc4a61f692930de201498f6f95a1e97db92dc34ad"} Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689098 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689537 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.689874 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.693126 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.904196 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.905990 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.906690 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.907403 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.908058 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.908107 4751 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 21:18:31 crc kubenswrapper[4751]: E0130 21:18:31.908672 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.980503 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.980962 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:31 crc kubenswrapper[4751]: I0130 21:18:31.981397 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: 
connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.034276 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.034983 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.035533 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: E0130 21:18:32.109828 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") pod \"937199db-2864-42e7-bd7b-65315d94920f\" (UID: \"937199db-2864-42e7-bd7b-65315d94920f\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117481 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock" (OuterVolumeSpecName: "var-lock") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117950 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.117988 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/937199db-2864-42e7-bd7b-65315d94920f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.125211 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "937199db-2864-42e7-bd7b-65315d94920f" (UID: "937199db-2864-42e7-bd7b-65315d94920f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.219049 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/937199db-2864-42e7-bd7b-65315d94920f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: E0130 21:18:32.512949 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="800ms" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.700772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"937199db-2864-42e7-bd7b-65315d94920f","Type":"ContainerDied","Data":"f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4"} Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.701078 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f462afd414faa46099f0f64a6a8955052851c4a4930523036193d340eca901c4" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.700798 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.768891 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.769350 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.773623 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.774376 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.774803 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.775308 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.775727 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.827925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928491 4751 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928806 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:32 crc kubenswrapper[4751]: I0130 21:18:32.928896 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.314382 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.710285 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711489 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" exitCode=0 Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711557 4751 scope.go:117] "RemoveContainer" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.711556 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.728858 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.729253 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.729847 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.733267 4751 scope.go:117] "RemoveContainer" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.749482 4751 scope.go:117] "RemoveContainer" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.764283 4751 scope.go:117] "RemoveContainer" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.781533 4751 scope.go:117] "RemoveContainer" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.809472 4751 scope.go:117] "RemoveContainer" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.834937 4751 scope.go:117] "RemoveContainer" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.835668 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": container with ID starting with a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d not found: ID does not exist" containerID="a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.835741 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d"} err="failed to get container status \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": rpc error: code = NotFound desc = could not find container \"a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d\": container with ID starting with a73993d4f4dc15915a933d91078e19a3211ab8090057596b06b3934c18cd0b4d not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.835772 4751 scope.go:117] "RemoveContainer" 
containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.836342 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": container with ID starting with 3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b not found: ID does not exist" containerID="3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836373 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b"} err="failed to get container status \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": rpc error: code = NotFound desc = could not find container \"3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b\": container with ID starting with 3c540c0ad669fa6fdfa55b3e83d38d922a963fd2fccd2e9a37dd9a4bced5143b not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836418 4751 scope.go:117] "RemoveContainer" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.836798 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": container with ID starting with b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9 not found: ID does not exist" containerID="b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836831 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9"} err="failed to get container status \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": rpc error: code = NotFound desc = could not find container \"b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9\": container with ID starting with b4c83e1a92d4ff5cb9568e791e2b4cced83c3381a442f4fe595e0167339e8da9 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.836852 4751 scope.go:117] "RemoveContainer" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.837339 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": container with ID starting with f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1 not found: ID does not exist" containerID="f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837368 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1"} err="failed to get container status \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": rpc error: code = NotFound desc = could not find container \"f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1\": container with ID starting with 
f64ea2c463a1de5a6959ed07ecbd685678a06f71a87df38461e6bb3fb2bac4b1 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837385 4751 scope.go:117] "RemoveContainer" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.837769 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": container with ID starting with 06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1 not found: ID does not exist" containerID="06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837798 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1"} err="failed to get container status \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": rpc error: code = NotFound desc = could not find container \"06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1\": container with ID starting with 06c8f0557fe2dd830bcef882161bd9abf4448062effd515eb72721765fc72dc1 not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.837818 4751 scope.go:117] "RemoveContainer" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.838394 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": container with ID starting with 6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac not found: ID does not exist" containerID="6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac" Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.838444 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac"} err="failed to get container status \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": rpc error: code = NotFound desc = could not find container \"6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac\": container with ID starting with 6dc50148ca87db26026e3c2175ed3601fa8c7418f3bb948713a338c1cad0cfac not found: ID does not exist" Jan 30 21:18:33 crc kubenswrapper[4751]: E0130 21:18:33.844351 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC 
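Editor's note: the "RemoveContainer" / "ContainerStatus from runtime service failed" pairs above show the kubelet's container garbage collector asking CRI-O about containers that are already gone, logging the gRPC NotFound, and moving on. Below is a minimal Go sketch of that idempotent-cleanup pattern, assuming only the grpc status package; it is an illustration, not the kubelet's actual code, and removeContainer is a hypothetical stub that always answers the way CRI-O answers in these lines.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call; here it always
// fails the way the runtime fails in the log entries above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist", id, id)
}

// removeIfPresent treats NotFound as success, so repeated cleanup passes
// (like the duplicate RemoveContainer lines above) stay harmless.
func removeIfPresent(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("removing container %s: %w", id, err)
	}
	return nil
}

func main() {
	fmt.Println(removeIfPresent("a73993d4f4dc")) // <nil>: "already gone" counts as removed
}

Treating NotFound as success is what lets these deletions log an error yet leave the sync loop healthy.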
Jan 30 21:18:33 crc kubenswrapper[4751]: I0130 21:18:33.984069 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 21:18:34 crc kubenswrapper[4751]: E0130 21:18:34.038630 4751 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" volumeName="registry-storage"
Jan 30 21:18:34 crc kubenswrapper[4751]: E0130 21:18:34.915558 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="3.2s"
Jan 30 21:18:38 crc kubenswrapper[4751]: E0130 21:18:38.117786 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="6.4s"
Jan 30 21:18:41 crc kubenswrapper[4751]: I0130 21:18:41.980966 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:41 crc kubenswrapper[4751]: I0130 21:18:41.981970 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.465169 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T21:18:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.466222 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.466750 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467151 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467644 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.467748 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 21:18:43 crc kubenswrapper[4751]: E0130 21:18:43.849678 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f9eeb030f560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,LastTimestamp:2026-01-30 21:18:30.785734156 +0000 UTC m=+249.531556845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 21:18:44 crc kubenswrapper[4751]: E0130 21:18:44.519429 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="7s"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.804439 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.805428 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817" exitCode=1
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.805604 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817"}
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807135 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807429 4751 scope.go:117] "RemoveContainer" containerID="1f2d1fedb4948ac306caf82f3731663cb06c886ba802122fbc72b2b3df649817"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.807928 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.808652 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.974965 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
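Editor's note: the lease controller's retry interval grows across the entries above (interval="3.2s", then "6.4s", then "7s"): it doubles on each failure and is capped. A stdlib-only Go sketch of that schedule follows; the constants are read off the interval fields in the log, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// interval="3.2s" -> "6.4s" -> "7s": double on each failure, then cap.
	interval := 3200 * time.Millisecond
	const maxInterval = 7 * time.Second
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed: connect: connection refused; retrying in %s\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}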
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.976315 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.977027 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:45 crc kubenswrapper[4751]: I0130 21:18:45.977929 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.003425 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.003469 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:46 crc kubenswrapper[4751]: E0130 21:18:46.003999 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.004696 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:46 crc kubenswrapper[4751]: W0130 21:18:46.026485 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323 WatchSource:0}: Error finding container a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323: Status 404 returned error can't find the container with id a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.817273 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.817653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1496ba1e480a418e497f3254ba5327a20e6b8be7abe0b396c67111c2b65c5bd9"}
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.818896 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.819766 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.820316 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821492 4751 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d99321ea5199b32b855c4fcaa1a3b37457fe01201f0df65df1111a4d4a66d348" exitCode=0
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d99321ea5199b32b855c4fcaa1a3b37457fe01201f0df65df1111a4d4a66d348"}
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.821658 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a4f22507a6831e8bdf44cb0120e0cb15ef5ae89d8edd74b9ca8137c01723f323"}
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.822170 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.822213 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.822657 4751 status_manager.go:851] "Failed to get status for pod" podUID="937199db-2864-42e7-bd7b-65315d94920f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: E0130 21:18:46.823003 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.823276 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:46 crc kubenswrapper[4751]: I0130 21:18:46.823881 4751 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused"
Jan 30 21:18:47 crc kubenswrapper[4751]: I0130 21:18:47.856653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c98f5864535909a1040c1af21dbb4bd51f1db489025cdf753cd7a8c6011df807"}
Jan 30 21:18:48 crc kubenswrapper[4751]: I0130 21:18:48.869309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"09c71adc517c1e66fa1618a0fb0e7caa43a685a2dcadd1e74b314230758f0db7"}
Jan 30 21:18:48 crc kubenswrapper[4751]: I0130 21:18:48.870527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e7b269e0dd7975a09103c23ed8e54ece7b67064e81475a5b6f7916060d7a54c"}
Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a79523122416de0c1b2f9b74648052fc9efbf66b68ae6f5bd030f24e187ec95"}
Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879714 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"080f907c4e15823a04b790dd2ccabbce01750bf18b95a319e6d5487004e32be6"}
Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879762 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879919 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:49 crc kubenswrapper[4751]: I0130 21:18:49.879955 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.451042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.800411 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:18:50 crc kubenswrapper[4751]: I0130 21:18:50.806932 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.005556 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.006032 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:51 crc kubenswrapper[4751]: I0130 21:18:51.015129 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.892607 4751 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.917405 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.917432 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.923251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:18:54 crc kubenswrapper[4751]: I0130 21:18:54.980778 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6c84dee-176a-4c58-8eea-812049fd208b"
Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.923372 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.923419 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="89512b2a-f57e-4242-9c12-8b8c660dc530"
Jan 30 21:18:55 crc kubenswrapper[4751]: I0130 21:18:55.926938 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6c84dee-176a-4c58-8eea-812049fd208b"
Jan 30 21:19:00 crc kubenswrapper[4751]: I0130 21:19:00.460521 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 21:19:03 crc kubenswrapper[4751]: I0130 21:19:03.696767 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
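Editor's note: the "SyncLoop (probe)" transitions above (startup "unhealthy" then "started", readiness "" then "ready") are driven by per-container probe specs in the static pod manifests. The Go sketch below shows the kind of spec involved, built with the real corev1 types; the path, port, and thresholds are illustrative assumptions, not the actual kube-apiserver manifest values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Startup probe: tolerate a slow start before the kubelet restarts the
	// container; readiness gates traffic once startup has succeeded.
	startup := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(6443), Scheme: corev1.URISchemeHTTPS},
		},
		PeriodSeconds:    5,
		FailureThreshold: 30,
	}
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(6443), Scheme: corev1.URISchemeHTTPS},
		},
		PeriodSeconds: 10,
	}
	fmt.Println("startup failureThreshold:", startup.FailureThreshold, "readiness period:", readiness.PeriodSeconds)
}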
Jan 30 21:19:03 crc kubenswrapper[4751]: I0130 21:19:03.849784 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.685566 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.726481 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.734794 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.879998 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 21:19:04 crc kubenswrapper[4751]: I0130 21:19:04.910252 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.167735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.195542 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.220180 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.559413 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.710723 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.838983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.843256 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.850701 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.896194 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 21:19:05 crc kubenswrapper[4751]: I0130 21:19:05.906727 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.354428 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.648682 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.788087 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.903842 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.933419 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:19:06 crc kubenswrapper[4751]: I0130 21:19:06.952603 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.094751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.152216 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.166839 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.234891 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.255540 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.329231 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.411853 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.516817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.547195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.751469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.797479 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.843008 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.902737 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.931577 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.936545 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.951077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 21:19:07 crc kubenswrapper[4751]: I0130 21:19:07.962582 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.001205 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.038628 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.077309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.190650 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.208185 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.258686 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.316060 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.400191 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.425716 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.482928 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.486112 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.486096768 podStartE2EDuration="38.486096768s" podCreationTimestamp="2026-01-30 21:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:18:54.948466312 +0000 UTC m=+273.694289011" watchObservedRunningTime="2026-01-30 21:19:08.486096768 +0000 UTC m=+287.231919427"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.487789 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.487840 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.494234 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.516128 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.516094708 podStartE2EDuration="14.516094708s" podCreationTimestamp="2026-01-30 21:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:19:08.506462086 +0000 UTC m=+287.252284745" watchObservedRunningTime="2026-01-30 21:19:08.516094708 +0000 UTC m=+287.261917397"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.646876 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.690802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.725782 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.737260 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.763032 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.824077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.906410 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 21:19:08 crc kubenswrapper[4751]: I0130 21:19:08.995390 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.004314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.030786 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.116942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.212058 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.234440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.255364 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.268044 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.277779 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.283050 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.391944 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
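Editor's note: the pod_startup_latency_tracker entries above carry their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (38.486096768s for the startup-monitor pod, 14.516094708s for kube-apiserver-crc). A stdlib-only Go check of the first entry's figure, using only timestamps read off the log line:

package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and watchObservedRunningTime from the entry above.
	created, _ := time.Parse(time.RFC3339, "2026-01-30T21:18:30Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-30T21:19:08.486096768Z")
	fmt.Println(observed.Sub(created)) // 38.486096768s, the logged podStartE2EDuration
}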
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.614739 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.662027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.734904 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.806664 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.835187 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.840594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.874933 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.882885 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 21:19:09 crc kubenswrapper[4751]: I0130 21:19:09.985605 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.000618 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.029219 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.069611 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.147019 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.241737 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.331291 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.506485 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.618164 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.626154 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.933774 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 21:19:10 crc kubenswrapper[4751]: I0130 21:19:10.946934 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.069507 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.111187 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.144483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.235995 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.311110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.481663 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.505437 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.521746 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.529179 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.551401 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.570405 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.588635 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.597360 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.694817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.696196 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.770385 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 21:19:11 crc kubenswrapper[4751]: I0130 21:19:11.798504 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.149411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.326800 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.430086 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.466563 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.524309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.555505 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.589735 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.660412 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.675871 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.679142 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.733671 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.759588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.837389 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.842492 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.858744 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.894180 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.921112 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 21:19:12 crc kubenswrapper[4751]: I0130 21:19:12.952814 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.001381 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.005270 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.047860 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.124003 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.263313 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.276505 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.319549 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.451034 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.551406 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.592237 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.595562 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.610222 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.658623 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.669123 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.744835 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.888801 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.921909 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 21:19:13 crc kubenswrapper[4751]: I0130 21:19:13.946115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.023201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.072126 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.157282 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
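Editor's note: each "Caches populated" line above is a client-go reflector finishing its initial list+watch for one type, after which the kubelet can serve that object from its local cache instead of hitting the API server. A minimal client-go sketch of the same pattern follows; it assumes in-cluster config and a single ConfigMap informer, and is a generic illustration rather than the kubelet's exact wiring.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The factory's reflectors list+watch each requested type.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This returning true is the moment the log above records as
	// "Caches populated for *v1.ConfigMap ...".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("ConfigMap cache populated")
}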
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.195861 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.201916 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.215546 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.272432 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.282228 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.407680 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.416140 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.661730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.671148 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.687226 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.690133 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.767843 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.804474 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.848195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 21:19:14 crc kubenswrapper[4751]: I0130 21:19:14.894478 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.009974 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.085684 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.103962 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.133045 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.263765 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.283118 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.301760 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.315683 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.318598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.350762 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.357740 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.371440 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.377957 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.530299 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.609574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.626482 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.673049 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.700408 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.735270 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.759160 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 21:19:15 crc kubenswrapper[4751]: I0130 21:19:15.853921 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.005027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 
21:19:16.010283 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.113778 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.114468 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.148997 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.195935 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.226387 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.233932 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.273435 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.315889 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.370202 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.379217 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.403812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.420498 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.515801 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.527646 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.536588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.699624 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.847976 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.889882 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:19:16 crc 
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.900818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 21:19:16 crc kubenswrapper[4751]: I0130 21:19:16.954490 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.061725 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.183097 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.356894 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.800422 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.817543 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.872505 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 21:19:17 crc kubenswrapper[4751]: I0130 21:19:17.906606 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.116892 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.180499 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.693463 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.748806 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.826450 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.858169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.871518 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 21:19:18 crc kubenswrapper[4751]: I0130 21:19:18.883536 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.036973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.229588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.503455 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.512456 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.602135 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.688268 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 21:19:19 crc kubenswrapper[4751]: I0130 21:19:19.754749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:19:20.030467 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:19:20.198518 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 21:19:20 crc kubenswrapper[4751]: I0130 21:19:20.446963 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 21:19:21 crc kubenswrapper[4751]: I0130 21:19:21.754581 4751 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.091751 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.091807 4751 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b" exitCode=137
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.504863 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.505475 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560659 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.560968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561418 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561519 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561864 4751 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561898 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.561946 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.562253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.572916 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.662590 4751 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.662990 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:22 crc kubenswrapper[4751]: I0130 21:19:22.663009 4751 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.102885 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.102994 4751 scope.go:117] "RemoveContainer" containerID="07daea61c26f8db733b7f11ca40decf928b78f0906e37dcb22d9a1f26b54e84b"
Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.103076 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.986561 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.987017 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.998449 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:19:23 crc kubenswrapper[4751]: I0130 21:19:23.998497 4751 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3ee0bd4-4fdb-4ee0-858f-860a616d1460" Jan 30 21:19:24 crc kubenswrapper[4751]: I0130 21:19:24.002392 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 21:19:24 crc kubenswrapper[4751]: I0130 21:19:24.002420 4751 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f3ee0bd4-4fdb-4ee0-858f-860a616d1460" Jan 30 21:19:29 crc kubenswrapper[4751]: I0130 21:19:29.391993 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 21:19:30 crc kubenswrapper[4751]: I0130 21:19:30.894572 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 21:19:32 crc kubenswrapper[4751]: I0130 21:19:32.143186 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 21:19:33 crc kubenswrapper[4751]: I0130 21:19:33.166411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 21:19:33 crc kubenswrapper[4751]: I0130 21:19:33.955666 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.861436 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.886672 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 21:19:37 crc kubenswrapper[4751]: I0130 21:19:37.928507 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 21:19:40 crc kubenswrapper[4751]: I0130 21:19:40.635423 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 21:19:42 crc kubenswrapper[4751]: I0130 21:19:42.286976 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 21:19:42 crc kubenswrapper[4751]: I0130 21:19:42.539959 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 21:19:44 crc kubenswrapper[4751]: I0130 21:19:44.087473 4751 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 21:19:44 crc kubenswrapper[4751]: I0130 21:19:44.978081 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 21:19:45 crc kubenswrapper[4751]: I0130 21:19:45.416973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 21:19:45 crc kubenswrapper[4751]: I0130 21:19:45.953195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 21:19:47 crc kubenswrapper[4751]: I0130 21:19:47.554574 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 21:19:48 crc kubenswrapper[4751]: I0130 21:19:48.690462 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 21:19:49 crc kubenswrapper[4751]: I0130 21:19:49.776553 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 21:19:49 crc kubenswrapper[4751]: I0130 21:19:49.952077 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 21:19:50 crc kubenswrapper[4751]: I0130 21:19:50.003772 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 21:19:50 crc kubenswrapper[4751]: I0130 21:19:50.551775 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 21:19:51 crc kubenswrapper[4751]: I0130 21:19:51.626511 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.066201 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.126922 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:19:54 crc kubenswrapper[4751]: I0130 21:19:54.127001 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:19:55 crc kubenswrapper[4751]: I0130 21:19:55.337514 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 21:19:55 crc kubenswrapper[4751]: I0130 21:19:55.766798 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 21:20:02 crc kubenswrapper[4751]: I0130 21:20:02.646079 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.609165 4751 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.610020 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" containerID="cri-o://4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" gracePeriod=30 Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.618552 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:20:08 crc kubenswrapper[4751]: I0130 21:20:08.618988 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" containerID="cri-o://f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" gracePeriod=30 Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.006081 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.011673 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045533 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045606 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045650 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045704 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") pod 
\"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045775 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") pod \"61e09136-e0d4-4c75-ad01-543778867411\" (UID: \"61e09136-e0d4-4c75-ad01-543778867411\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045797 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.045835 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") pod \"322809f5-4f4c-487e-8488-6c62bac86f8f\" (UID: \"322809f5-4f4c-487e-8488-6c62bac86f8f\") " Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.047351 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca" (OuterVolumeSpecName: "client-ca") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.050412 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config" (OuterVolumeSpecName: "config") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.053756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config" (OuterVolumeSpecName: "config") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.053947 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca" (OuterVolumeSpecName: "client-ca") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.054663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.054725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.059273 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729" (OuterVolumeSpecName: "kube-api-access-kx729") pod "322809f5-4f4c-487e-8488-6c62bac86f8f" (UID: "322809f5-4f4c-487e-8488-6c62bac86f8f"). InnerVolumeSpecName "kube-api-access-kx729". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.061068 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh" (OuterVolumeSpecName: "kube-api-access-wtnfh") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "kube-api-access-wtnfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.061289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61e09136-e0d4-4c75-ad01-543778867411" (UID: "61e09136-e0d4-4c75-ad01-543778867411"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147357 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtnfh\" (UniqueName: \"kubernetes.io/projected/61e09136-e0d4-4c75-ad01-543778867411-kube-api-access-wtnfh\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147421 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e09136-e0d4-4c75-ad01-543778867411-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147442 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/322809f5-4f4c-487e-8488-6c62bac86f8f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147463 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147513 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147531 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147549 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e09136-e0d4-4c75-ad01-543778867411-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147566 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/322809f5-4f4c-487e-8488-6c62bac86f8f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.147583 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx729\" (UniqueName: \"kubernetes.io/projected/322809f5-4f4c-487e-8488-6c62bac86f8f-kube-api-access-kx729\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415658 4751 generic.go:334] "Generic (PLEG): container finished" podID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" exitCode=0 Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415740 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerDied","Data":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"} Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415864 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp" event={"ID":"322809f5-4f4c-487e-8488-6c62bac86f8f","Type":"ContainerDied","Data":"f7e0d553caf37cf1c65a97cae3829801333e8d0eb24ba3398a66bb00e08506f3"} Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.415902 4751 scope.go:117] "RemoveContainer" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419193 4751 generic.go:334] "Generic (PLEG): container finished" podID="61e09136-e0d4-4c75-ad01-543778867411" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" exitCode=0 Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerDied","Data":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"} Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419291 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.419302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8jsqt" event={"ID":"61e09136-e0d4-4c75-ad01-543778867411","Type":"ContainerDied","Data":"bcc3e35fa7bf77d352470a19ce3b00e0ae26473ecc7d562f4aa3b014710b8b83"} Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.446641 4751 scope.go:117] "RemoveContainer" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.447381 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": container with ID starting with f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b not found: ID does not exist" containerID="f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.447725 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b"} err="failed to get container status \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": rpc error: code = NotFound desc = could not find container \"f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b\": container with ID starting with f941baa5ba0d3e32dd492e4e84997435c31d8bd5216b7dacf8d6df3060c1827b not found: ID does not exist" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.447771 4751 scope.go:117] "RemoveContainer" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" Jan 30 
21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.468596 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.472367 4751 scope.go:117] "RemoveContainer" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.474482 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": container with ID starting with 4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3 not found: ID does not exist" containerID="4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.474547 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3"} err="failed to get container status \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": rpc error: code = NotFound desc = could not find container \"4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3\": container with ID starting with 4df18ee24c522527074d638e05b39d9ec896a8a13159255abd65c9142157efc3 not found: ID does not exist" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.478933 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8z9vp"] Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.487365 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.495549 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8jsqt"] Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.825584 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826044 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826065 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer" Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826124 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826138 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826164 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826216 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:20:09 crc kubenswrapper[4751]: E0130 21:20:09.826243 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" 
containerName="route-controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826256 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826610 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e09136-e0d4-4c75-ad01-543778867411" containerName="controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826676 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826693 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="937199db-2864-42e7-bd7b-65315d94920f" containerName="installer" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.826709 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" containerName="route-controller-manager" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.827482 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.829885 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.831844 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.832873 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.833776 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.834178 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.834428 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.837282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.844051 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.858313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.858925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " 
pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.859624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.860115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.860637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.961992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.962241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.964617 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.964976 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.965320 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.971786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.988011 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322809f5-4f4c-487e-8488-6c62bac86f8f" path="/var/lib/kubelet/pods/322809f5-4f4c-487e-8488-6c62bac86f8f/volumes" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.989203 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e09136-e0d4-4c75-ad01-543778867411" path="/var/lib/kubelet/pods/61e09136-e0d4-4c75-ad01-543778867411/volumes" Jan 30 21:20:09 crc kubenswrapper[4751]: I0130 21:20:09.993088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"controller-manager-67fbdd65b9-z58cx\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.162109 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.435221 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.822254 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"] Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.823254 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.830741 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.830973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831091 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831119 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.831279 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.840209 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.851372 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"] Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876218 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.876356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod 
\"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976938 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976979 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.976999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.977963 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-client-ca\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.978193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6651fa2d-f596-4675-a425-f8baff64a3d6-config\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.989705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6651fa2d-f596-4675-a425-f8baff64a3d6-serving-cert\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:10 crc kubenswrapper[4751]: I0130 21:20:10.994886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55rq\" (UniqueName: \"kubernetes.io/projected/6651fa2d-f596-4675-a425-f8baff64a3d6-kube-api-access-h55rq\") pod \"route-controller-manager-57b9dfd8bf-xwxm6\" (UID: \"6651fa2d-f596-4675-a425-f8baff64a3d6\") " pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.140586 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472210 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerStarted","Data":"727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe"} Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472586 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.472601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerStarted","Data":"c085dd6df8388c91766eaaa1380463bb804c12c891fe6f412357624bc97f5f67"} Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.479459 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.521511 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" podStartSLOduration=3.521473476 podStartE2EDuration="3.521473476s" podCreationTimestamp="2026-01-30 21:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:11.501084629 +0000 UTC m=+350.246907308" watchObservedRunningTime="2026-01-30 21:20:11.521473476 +0000 UTC m=+350.267296155" Jan 30 21:20:11 crc kubenswrapper[4751]: I0130 21:20:11.567223 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6"] Jan 30 21:20:11 crc kubenswrapper[4751]: W0130 21:20:11.569511 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6651fa2d_f596_4675_a425_f8baff64a3d6.slice/crio-619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180 WatchSource:0}: Error finding container 619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180: Status 404 returned error can't find the container with id 619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180 Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.480471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" event={"ID":"6651fa2d-f596-4675-a425-f8baff64a3d6","Type":"ContainerStarted","Data":"319c9c4919ac764c36070d567a3875051ca852e988cc204a844091e978a934ba"} Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.482070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" event={"ID":"6651fa2d-f596-4675-a425-f8baff64a3d6","Type":"ContainerStarted","Data":"619922289cfcd974a4d7cfe3325a39ac3baf4d25fe4bbf3bef013feb10514180"} Jan 30 21:20:12 crc kubenswrapper[4751]: I0130 21:20:12.501969 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" podStartSLOduration=4.501951473 podStartE2EDuration="4.501951473s" 
podCreationTimestamp="2026-01-30 21:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:12.501410858 +0000 UTC m=+351.247233507" watchObservedRunningTime="2026-01-30 21:20:12.501951473 +0000 UTC m=+351.247774132" Jan 30 21:20:13 crc kubenswrapper[4751]: I0130 21:20:13.487348 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:13 crc kubenswrapper[4751]: I0130 21:20:13.496111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57b9dfd8bf-xwxm6" Jan 30 21:20:24 crc kubenswrapper[4751]: I0130 21:20:24.126903 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:20:24 crc kubenswrapper[4751]: I0130 21:20:24.127492 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.443974 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.445557 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.468257 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620599 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620689 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.620921 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.621040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.621120 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.671658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722875 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.722937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723017 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.723169 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.725120 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-trusted-ca\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.729097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1963e246-4713-4682-8915-12bbc2f33d95-registry-certificates\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.729175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1963e246-4713-4682-8915-12bbc2f33d95-ca-trust-extracted\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.732940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1963e246-4713-4682-8915-12bbc2f33d95-installation-pull-secrets\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.733442 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-registry-tls\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.745037 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-bound-sa-token\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.747182 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfb7\" (UniqueName: \"kubernetes.io/projected/1963e246-4713-4682-8915-12bbc2f33d95-kube-api-access-9wfb7\") pod \"image-registry-66df7c8f76-psfpp\" (UID: \"1963e246-4713-4682-8915-12bbc2f33d95\") " pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:34 crc kubenswrapper[4751]: I0130 21:20:34.766177 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.226492 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-psfpp"] Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" event={"ID":"1963e246-4713-4682-8915-12bbc2f33d95","Type":"ContainerStarted","Data":"f51852353928f67bedd9a60de000891ba690364ad64c236bb06988638548d5db"} Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665768 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" event={"ID":"1963e246-4713-4682-8915-12bbc2f33d95","Type":"ContainerStarted","Data":"a0f0d247ce14cb38f91bcc803c2567541a4495a9fc6dc4d5789752f66ecd05cb"} Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.665919 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:35 crc kubenswrapper[4751]: I0130 21:20:35.692065 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" podStartSLOduration=1.692037077 podStartE2EDuration="1.692037077s" podCreationTimestamp="2026-01-30 21:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:35.691835571 +0000 UTC m=+374.437658250" watchObservedRunningTime="2026-01-30 21:20:35.692037077 +0000 UTC m=+374.437859776" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.552757 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.553665 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" containerID="cri-o://727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" gracePeriod=30 Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.750742 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerID="727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" exitCode=0 Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.750794 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerDied","Data":"727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe"} Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.948900 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954671 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.954930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.955812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config" (OuterVolumeSpecName: "config") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.955920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.956384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca" (OuterVolumeSpecName: "client-ca") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.956477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") pod \"56ca1bea-be0c-4187-85e8-33290c1ac419\" (UID: \"56ca1bea-be0c-4187-85e8-33290c1ac419\") " Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957539 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957903 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957933 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.957950 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56ca1bea-be0c-4187-85e8-33290c1ac419-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.960510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:20:48 crc kubenswrapper[4751]: I0130 21:20:48.960764 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj" (OuterVolumeSpecName: "kube-api-access-5r6dj") pod "56ca1bea-be0c-4187-85e8-33290c1ac419" (UID: "56ca1bea-be0c-4187-85e8-33290c1ac419"). InnerVolumeSpecName "kube-api-access-5r6dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.058688 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56ca1bea-be0c-4187-85e8-33290c1ac419-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.058726 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r6dj\" (UniqueName: \"kubernetes.io/projected/56ca1bea-be0c-4187-85e8-33290c1ac419-kube-api-access-5r6dj\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" event={"ID":"56ca1bea-be0c-4187-85e8-33290c1ac419","Type":"ContainerDied","Data":"c085dd6df8388c91766eaaa1380463bb804c12c891fe6f412357624bc97f5f67"} Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759755 4751 scope.go:117] "RemoveContainer" containerID="727bd2b956937e646c881e73ea4cae6c00e8e56224d81dcf10b0bf0d1d5db9fe" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.759412 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-z58cx" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.814570 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.824842 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-z58cx"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.858785 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:49 crc kubenswrapper[4751]: E0130 21:20:49.860766 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.860800 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.860969 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" containerName="controller-manager" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.861588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.863938 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868113 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868296 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868207 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868442 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.868706 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.875266 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.881958 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.971871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.972080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.972195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:49 crc kubenswrapper[4751]: I0130 21:20:49.987184 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ca1bea-be0c-4187-85e8-33290c1ac419" path="/var/lib/kubelet/pods/56ca1bea-be0c-4187-85e8-33290c1ac419/volumes" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.073981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 
crc kubenswrapper[4751]: I0130 21:20:50.074031 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.075489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-proxy-ca-bundles\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.076856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-config\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.077048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbc546db-e9d3-40f3-9256-647759116f56-client-ca\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.089083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbc546db-e9d3-40f3-9256-647759116f56-serving-cert\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.104497 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5vm\" (UniqueName: \"kubernetes.io/projected/fbc546db-e9d3-40f3-9256-647759116f56-kube-api-access-gr5vm\") pod \"controller-manager-747dd9759b-6wjt2\" (UID: \"fbc546db-e9d3-40f3-9256-647759116f56\") " pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.187397 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.401276 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-747dd9759b-6wjt2"] Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.768746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" event={"ID":"fbc546db-e9d3-40f3-9256-647759116f56","Type":"ContainerStarted","Data":"c28347fb621a15c101af03728776f4e336c6f48b978982a3712d3d04acfbface"} Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.769167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" event={"ID":"fbc546db-e9d3-40f3-9256-647759116f56","Type":"ContainerStarted","Data":"388c1b696957d2112af1748b75b308cbc4f1c5fc159d891f74bc60f4c863d67b"} Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.769190 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.778841 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" Jan 30 21:20:50 crc kubenswrapper[4751]: I0130 21:20:50.812935 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-747dd9759b-6wjt2" podStartSLOduration=2.812913719 podStartE2EDuration="2.812913719s" podCreationTimestamp="2026-01-30 21:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:50.792556312 +0000 UTC m=+389.538378991" watchObservedRunningTime="2026-01-30 21:20:50.812913719 +0000 UTC m=+389.558736378" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.720306 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.721033 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-54ffx" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server" containerID="cri-o://1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.741214 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.741802 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvvq8" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server" containerID="cri-o://6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.753413 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.753815 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" 
containerID="cri-o://b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.765797 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.766219 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6v829" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" containerID="cri-o://1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.777954 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.778350 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zct7w" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server" containerID="cri-o://6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" gracePeriod=30 Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.784116 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.785342 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.793557 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.826754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 
30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.927669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.929806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.949710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:52 crc kubenswrapper[4751]: I0130 21:20:52.950126 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"marketplace-operator-79b997595-76rml\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") " pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.208517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.217884 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.218462 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.226784 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.232155 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235828 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235859 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") pod \"5243a1a5-2eaa-4437-b10e-602439c7c838\" (UID: \"5243a1a5-2eaa-4437-b10e-602439c7c838\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh7pr\" (UniqueName: \"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") pod \"5de678c2-f43a-44fa-ab58-259f765c3e31\" (UID: \"5de678c2-f43a-44fa-ab58-259f765c3e31\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.235995 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") pod \"a59ef52d-2f47-42ac-a233-0285be317cc9\" (UID: \"a59ef52d-2f47-42ac-a233-0285be317cc9\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.240971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities" 
(OuterVolumeSpecName: "utilities") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.241808 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665" (OuterVolumeSpecName: "kube-api-access-b7665") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "kube-api-access-b7665". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5243a1a5-2eaa-4437-b10e-602439c7c838" (UID: "5243a1a5-2eaa-4437-b10e-602439c7c838"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities" (OuterVolumeSpecName: "utilities") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.245576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d" (OuterVolumeSpecName: "kube-api-access-tng6d") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "kube-api-access-tng6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.247546 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr" (OuterVolumeSpecName: "kube-api-access-dh7pr") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). InnerVolumeSpecName "kube-api-access-dh7pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.291892 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.311235 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5de678c2-f43a-44fa-ab58-259f765c3e31" (UID: "5de678c2-f43a-44fa-ab58-259f765c3e31"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337650 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337671 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") pod \"94e03be5-809d-49ba-9318-6222131628f5\" (UID: \"94e03be5-809d-49ba-9318-6222131628f5\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") pod \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\" (UID: \"b05ec0ea-cf7e-46ce-9814-a4597ebcf238\") " Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337962 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337977 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5243a1a5-2eaa-4437-b10e-602439c7c838-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337987 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.337998 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tng6d\" (UniqueName: \"kubernetes.io/projected/a59ef52d-2f47-42ac-a233-0285be317cc9-kube-api-access-tng6d\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338010 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh7pr\" (UniqueName: 
\"kubernetes.io/projected/5de678c2-f43a-44fa-ab58-259f765c3e31-kube-api-access-dh7pr\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338022 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338033 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7665\" (UniqueName: \"kubernetes.io/projected/5243a1a5-2eaa-4437-b10e-602439c7c838-kube-api-access-b7665\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338044 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de678c2-f43a-44fa-ab58-259f765c3e31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.338888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities" (OuterVolumeSpecName: "utilities") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.339832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities" (OuterVolumeSpecName: "utilities") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.341514 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g" (OuterVolumeSpecName: "kube-api-access-97l8g") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "kube-api-access-97l8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.342134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs" (OuterVolumeSpecName: "kube-api-access-sm6bs") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "kube-api-access-sm6bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.372801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94e03be5-809d-49ba-9318-6222131628f5" (UID: "94e03be5-809d-49ba-9318-6222131628f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.415817 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a59ef52d-2f47-42ac-a233-0285be317cc9" (UID: "a59ef52d-2f47-42ac-a233-0285be317cc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439343 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59ef52d-2f47-42ac-a233-0285be317cc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439375 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm6bs\" (UniqueName: \"kubernetes.io/projected/94e03be5-809d-49ba-9318-6222131628f5-kube-api-access-sm6bs\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439387 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97l8g\" (UniqueName: \"kubernetes.io/projected/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-kube-api-access-97l8g\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439394 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439403 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94e03be5-809d-49ba-9318-6222131628f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.439410 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.471126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05ec0ea-cf7e-46ce-9814-a4597ebcf238" (UID: "b05ec0ea-cf7e-46ce-9814-a4597ebcf238"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.540352 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05ec0ea-cf7e-46ce-9814-a4597ebcf238-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.663682 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:20:53 crc kubenswrapper[4751]: W0130 21:20:53.663770 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcb33b0_97a6_4ded_96b6_1c5bd9053977.slice/crio-644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb WatchSource:0}: Error finding container 644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb: Status 404 returned error can't find the container with id 644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.793057 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerStarted","Data":"644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.798938 4751 generic.go:334] "Generic (PLEG): container finished" podID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" exitCode=0 Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.798992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zct7w" event={"ID":"b05ec0ea-cf7e-46ce-9814-a4597ebcf238","Type":"ContainerDied","Data":"804ecfb30bc123f3020417772e2716aa7215e9f0bbcc895b3845fd67eade69b4"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799038 4751 scope.go:117] "RemoveContainer" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.799151 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zct7w" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804041 4751 generic.go:334] "Generic (PLEG): container finished" podID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" exitCode=0 Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804114 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerDied","Data":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.804160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tr6kv" event={"ID":"5243a1a5-2eaa-4437-b10e-602439c7c838","Type":"ContainerDied","Data":"645fc7ebe618428269447cd8603adff67691b64d1f9d9c2663bb2b21ba6d290d"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808122 4751 generic.go:334] "Generic (PLEG): container finished" podID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" exitCode=0 Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54ffx" event={"ID":"a59ef52d-2f47-42ac-a233-0285be317cc9","Type":"ContainerDied","Data":"3de6594576878279730bf6ad7c0a39ba28b9c63e62d19e6f38aaeefbede04797"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.808282 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54ffx" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816078 4751 generic.go:334] "Generic (PLEG): container finished" podID="94e03be5-809d-49ba-9318-6222131628f5" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" exitCode=0 Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816155 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v829" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.816269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v829" event={"ID":"94e03be5-809d-49ba-9318-6222131628f5","Type":"ContainerDied","Data":"960e022b4f8bb566d2fdbe8e623c147ebba25b0f4a883e6013345ce05433bda9"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820222 4751 generic.go:334] "Generic (PLEG): container finished" podID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" exitCode=0 Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvvq8" event={"ID":"5de678c2-f43a-44fa-ab58-259f765c3e31","Type":"ContainerDied","Data":"25d69c268722a1234878b44da4db4eac47a853d184bfae913c7a2d4ea1ad28d3"} Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.820374 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvvq8" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.822112 4751 scope.go:117] "RemoveContainer" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.843306 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.853990 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zct7w"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.863587 4751 scope.go:117] "RemoveContainer" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.863688 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.868836 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tr6kv"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.874042 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.878884 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvvq8"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.890919 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.890976 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6v829"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.896395 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.900798 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-54ffx"] Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.926466 4751 scope.go:117] "RemoveContainer" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.928522 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": container with ID starting with 6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7 not found: ID does not exist" containerID="6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928586 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7"} err="failed to get container status \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": rpc error: code = NotFound desc = could not find container \"6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7\": container with ID starting with 6eccfc4d6ba2618226d304c5b2bd1fb297b15f5857edfd28a3313f58debc08a7 not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928613 4751 scope.go:117] "RemoveContainer" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.928935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": container with ID starting with f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1 not found: ID does not exist" containerID="f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928971 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1"} err="failed to get container status \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": rpc error: code = NotFound desc = could not find container \"f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1\": container with ID starting with f04d948c4bd5fbf97c9dbf36276c922630c269da2556ed764c208461a423cfb1 not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.928984 4751 scope.go:117] "RemoveContainer" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.929217 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": container with ID starting with 3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642 not found: ID does not exist" containerID="3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642" Jan 30 21:20:53 crc 
kubenswrapper[4751]: I0130 21:20:53.929243 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642"} err="failed to get container status \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": rpc error: code = NotFound desc = could not find container \"3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642\": container with ID starting with 3559a530c75f8ff68a5cad975d02e6fac3ea4f198ad887abab6fb34ab0d38642 not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.929259 4751 scope.go:117] "RemoveContainer" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942482 4751 scope.go:117] "RemoveContainer" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.942930 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": container with ID starting with b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b not found: ID does not exist" containerID="b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942956 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b"} err="failed to get container status \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": rpc error: code = NotFound desc = could not find container \"b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b\": container with ID starting with b77d62c140b69b82d5cbf6eb5008135711a6580ea5f979e5e8815b4aa184e76b not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.942972 4751 scope.go:117] "RemoveContainer" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.958531 4751 scope.go:117] "RemoveContainer" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.977970 4751 scope.go:117] "RemoveContainer" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.985136 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" path="/var/lib/kubelet/pods/5243a1a5-2eaa-4437-b10e-602439c7c838/volumes" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.985692 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" path="/var/lib/kubelet/pods/5de678c2-f43a-44fa-ab58-259f765c3e31/volumes" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.986410 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e03be5-809d-49ba-9318-6222131628f5" path="/var/lib/kubelet/pods/94e03be5-809d-49ba-9318-6222131628f5/volumes" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.987514 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" path="/var/lib/kubelet/pods/a59ef52d-2f47-42ac-a233-0285be317cc9/volumes" Jan 30 21:20:53 crc 
kubenswrapper[4751]: I0130 21:20:53.988178 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" path="/var/lib/kubelet/pods/b05ec0ea-cf7e-46ce-9814-a4597ebcf238/volumes" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997126 4751 scope.go:117] "RemoveContainer" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.997471 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": container with ID starting with 1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2 not found: ID does not exist" containerID="1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997710 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2"} err="failed to get container status \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": rpc error: code = NotFound desc = could not find container \"1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2\": container with ID starting with 1729cdfa83c5660b5e1741a763d71c952a65fa4fd1d132a64dc5d06c93fbbbb2 not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.997747 4751 scope.go:117] "RemoveContainer" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.997982 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": container with ID starting with 1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b not found: ID does not exist" containerID="1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998007 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b"} err="failed to get container status \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": rpc error: code = NotFound desc = could not find container \"1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b\": container with ID starting with 1ce35f5828f7898b6d403ab0ee2c2611372e65524c25b1e093fe6f0ff286146b not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998026 4751 scope.go:117] "RemoveContainer" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170" Jan 30 21:20:53 crc kubenswrapper[4751]: E0130 21:20:53.998178 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": container with ID starting with 59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170 not found: ID does not exist" containerID="59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998197 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170"} err="failed to get container status \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": rpc error: code = NotFound desc = could not find container \"59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170\": container with ID starting with 59d0d5accbd0e741a45d2c9f0929c8ef4dea19acf5f22156ad129ee6a57b7170 not found: ID does not exist" Jan 30 21:20:53 crc kubenswrapper[4751]: I0130 21:20:53.998209 4751 scope.go:117] "RemoveContainer" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.021317 4751 scope.go:117] "RemoveContainer" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.038402 4751 scope.go:117] "RemoveContainer" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062076 4751 scope.go:117] "RemoveContainer" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.062600 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": container with ID starting with 1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6 not found: ID does not exist" containerID="1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062661 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6"} err="failed to get container status \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": rpc error: code = NotFound desc = could not find container \"1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6\": container with ID starting with 1ddc49d3ac552029a70cba19f836098840b08d811a4f18ddc5887959c1deeaf6 not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.062690 4751 scope.go:117] "RemoveContainer" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.062997 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": container with ID starting with cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855 not found: ID does not exist" containerID="cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063025 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855"} err="failed to get container status \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": rpc error: code = NotFound desc = could not find container \"cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855\": container with ID starting with cc10611800a185029b7504531f1aef239f78d89a7c893703ac97a90606882855 not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063045 4751 
scope.go:117] "RemoveContainer" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.063570 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": container with ID starting with 2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e not found: ID does not exist" containerID="2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063660 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e"} err="failed to get container status \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": rpc error: code = NotFound desc = could not find container \"2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e\": container with ID starting with 2e2bfeb85bee453c5562087b8952080d63fb468a903b18dd2f8152c589c7b24e not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.063730 4751 scope.go:117] "RemoveContainer" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.078461 4751 scope.go:117] "RemoveContainer" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.094793 4751 scope.go:117] "RemoveContainer" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.110687 4751 scope.go:117] "RemoveContainer" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111083 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": container with ID starting with 6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470 not found: ID does not exist" containerID="6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111123 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470"} err="failed to get container status \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": rpc error: code = NotFound desc = could not find container \"6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470\": container with ID starting with 6374876bc7ed115e17f1c5d36bfa76b152a97c3a41fab6547007a48a90913470 not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111149 4751 scope.go:117] "RemoveContainer" containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111398 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": container with ID starting with 32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89 not found: ID does not exist" 
containerID="32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111421 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89"} err="failed to get container status \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": rpc error: code = NotFound desc = could not find container \"32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89\": container with ID starting with 32572f27c9ea03ce6c7b9dc8b2a5e53bbd80e2ecb4680f00bc3edf45a75a5c89 not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111434 4751 scope.go:117] "RemoveContainer" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478" Jan 30 21:20:54 crc kubenswrapper[4751]: E0130 21:20:54.111617 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": container with ID starting with c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478 not found: ID does not exist" containerID="c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.111632 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478"} err="failed to get container status \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": rpc error: code = NotFound desc = could not find container \"c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478\": container with ID starting with c1ef4ef3016d2aeab48af690c5e15404a48316a180fc721a730e50ce33561478 not found: ID does not exist" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127097 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127135 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127174 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127784 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.127833 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" 
containerName="machine-config-daemon" containerID="cri-o://daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa" gracePeriod=600 Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.776255 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-psfpp" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829348 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa" exitCode=0 Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829368 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"} Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.829495 4751 scope.go:117] "RemoveContainer" containerID="804fcddb2cc1601c5ab5ccff3221ed963625877eb0e2269c57d0ce3fc27ba125" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.847804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerStarted","Data":"6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744"} Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.850046 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.853958 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.862870 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"] Jan 30 21:20:54 crc kubenswrapper[4751]: I0130 21:20:54.906272 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" podStartSLOduration=2.90620024 podStartE2EDuration="2.90620024s" podCreationTimestamp="2026-01-30 21:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:20:54.881560066 +0000 UTC m=+393.627382745" watchObservedRunningTime="2026-01-30 21:20:54.90620024 +0000 UTC m=+393.652022899" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.230915 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231459 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231482 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231497 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231507 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" 
containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231516 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231524 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231533 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231540 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231574 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231581 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231594 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231601 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231612 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231620 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231630 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231639 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231650 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231657 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231668 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231676 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231684 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231691 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="extract-utilities" Jan 30 21:20:55 crc kubenswrapper[4751]: E0130 21:20:55.231703 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231710 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="extract-content" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231811 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5243a1a5-2eaa-4437-b10e-602439c7c838" containerName="marketplace-operator" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231828 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59ef52d-2f47-42ac-a233-0285be317cc9" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231838 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e03be5-809d-49ba-9318-6222131628f5" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231853 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de678c2-f43a-44fa-ab58-259f765c3e31" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.231862 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05ec0ea-cf7e-46ce-9814-a4597ebcf238" containerName="registry-server" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.232690 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.234491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.286297 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372062 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372128 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.372168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473665 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.473710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.474241 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.474543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"redhat-marketplace-btf57\" (UID: 
\"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.493407 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"redhat-marketplace-btf57\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") " pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.558412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.902995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"} Jan 30 21:20:55 crc kubenswrapper[4751]: I0130 21:20:55.961828 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:20:55 crc kubenswrapper[4751]: W0130 21:20:55.969554 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a24b1f1_0656_41ef_826d_c6c40f96b470.slice/crio-e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b WatchSource:0}: Error finding container e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b: Status 404 returned error can't find the container with id e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.210033 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.211173 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.213267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.223123 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"]
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.385852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.386250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.386291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.487970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.488097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.516117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"redhat-operators-p4hxc\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") " pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.539453 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.914676 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7" exitCode=0
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.916917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7"}
Jan 30 21:20:56 crc kubenswrapper[4751]: I0130 21:20:56.917419 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.005509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"]
Jan 30 21:20:57 crc kubenswrapper[4751]: W0130 21:20:57.012011 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448ce159_6181_433b_a28a_d00b9240b5af.slice/crio-73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507 WatchSource:0}: Error finding container 73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507: Status 404 returned error can't find the container with id 73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.922056 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924023 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e" exitCode=0
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924072 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e"}
Jan 30 21:20:57 crc kubenswrapper[4751]: I0130 21:20:57.924116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507"}
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.014817 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-twcnd"]
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.016302 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.061863 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.067968 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"]
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106305 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.106775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208073 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.208649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.230679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"certified-operators-twcnd\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") " pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.374816 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.774577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"]
Jan 30 21:20:58 crc kubenswrapper[4751]: W0130 21:20:58.782807 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac49c6a1_fa74_49f3_ba94_c5a469df4a93.slice/crio-1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a WatchSource:0}: Error finding container 1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a: Status 404 returned error can't find the container with id 1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.929929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerStarted","Data":"1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a"}
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.931546 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f" exitCode=0
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.931597 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f"}
Jan 30 21:20:58 crc kubenswrapper[4751]: I0130 21:20:58.933127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5"}
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.010884 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd2xs"]
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.012262 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.014188 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.018912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"]
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.120884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.222637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.224079 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.224928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.243438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"community-operators-bd2xs\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.360484 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.728257 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"]
Jan 30 21:20:59 crc kubenswrapper[4751]: W0130 21:20:59.737453 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda791b2a3_aead_4130_bdfa_e219f2d47593.slice/crio-434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626 WatchSource:0}: Error finding container 434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626: Status 404 returned error can't find the container with id 434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.938873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerStarted","Data":"434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626"}
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.941388 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4" exitCode=0
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.941452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4"}
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.944282 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5" exitCode=0
Jan 30 21:20:59 crc kubenswrapper[4751]: I0130 21:20:59.944343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5"}
Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.951458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerStarted","Data":"7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088"}
Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.953176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerStarted","Data":"a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce"}
Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.955580 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16" exitCode=0
Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.955612 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16"}
Jan 30 21:21:00 crc kubenswrapper[4751]: I0130 21:21:00.990887 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btf57" podStartSLOduration=3.215963105 podStartE2EDuration="5.990871471s" podCreationTimestamp="2026-01-30 21:20:55 +0000 UTC" firstStartedPulling="2026-01-30 21:20:56.927830133 +0000 UTC m=+395.673652782" lastFinishedPulling="2026-01-30 21:20:59.702738489 +0000 UTC m=+398.448561148" observedRunningTime="2026-01-30 21:21:00.971638461 +0000 UTC m=+399.717461140" watchObservedRunningTime="2026-01-30 21:21:00.990871471 +0000 UTC m=+399.736694120"
Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.039674 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4hxc" podStartSLOduration=2.363838826 podStartE2EDuration="5.039653928s" podCreationTimestamp="2026-01-30 21:20:56 +0000 UTC" firstStartedPulling="2026-01-30 21:20:57.925601466 +0000 UTC m=+396.671424115" lastFinishedPulling="2026-01-30 21:21:00.601416568 +0000 UTC m=+399.347239217" observedRunningTime="2026-01-30 21:21:01.03789329 +0000 UTC m=+399.783715939" watchObservedRunningTime="2026-01-30 21:21:01.039653928 +0000 UTC m=+399.785476597"
Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.962468 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e" exitCode=0
Jan 30 21:21:01 crc kubenswrapper[4751]: I0130 21:21:01.962668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e"}
Jan 30 21:21:02 crc kubenswrapper[4751]: I0130 21:21:02.970229 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2" exitCode=0
Jan 30 21:21:02 crc kubenswrapper[4751]: I0130 21:21:02.970470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2"}
Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.982082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerStarted","Data":"da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29"}
Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.983822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerStarted","Data":"960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216"}
Jan 30 21:21:04 crc kubenswrapper[4751]: I0130 21:21:04.997290 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-twcnd" podStartSLOduration=4.159214164 podStartE2EDuration="7.997274789s" podCreationTimestamp="2026-01-30 21:20:57 +0000 UTC" firstStartedPulling="2026-01-30 21:20:59.942578713 +0000 UTC m=+398.688401362" lastFinishedPulling="2026-01-30 21:21:03.780639338 +0000 UTC m=+402.526461987" observedRunningTime="2026-01-30 21:21:04.996052817 +0000 UTC m=+403.741875466" watchObservedRunningTime="2026-01-30 21:21:04.997274789 +0000 UTC m=+403.743097448"
Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.015287 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd2xs" podStartSLOduration=4.095982573 podStartE2EDuration="7.015267056s" podCreationTimestamp="2026-01-30 21:20:58 +0000 UTC" firstStartedPulling="2026-01-30 21:21:00.960007508 +0000 UTC m=+399.705830167" lastFinishedPulling="2026-01-30 21:21:03.879292001 +0000 UTC m=+402.625114650" observedRunningTime="2026-01-30 21:21:05.014987208 +0000 UTC m=+403.760809857" watchObservedRunningTime="2026-01-30 21:21:05.015267056 +0000 UTC m=+403.761089705"
Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.559686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.560000 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:21:05 crc kubenswrapper[4751]: I0130 21:21:05.611161 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.034148 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btf57"
Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.540246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:21:06 crc kubenswrapper[4751]: I0130 21:21:06.540673 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:21:07 crc kubenswrapper[4751]: I0130 21:21:07.589598 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4hxc" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" probeResult="failure" output=<
Jan 30 21:21:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 21:21:07 crc kubenswrapper[4751]: >
Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.375550 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.376135 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:21:08 crc kubenswrapper[4751]: I0130 21:21:08.430257 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.053959 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.361456 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.361608 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:21:09 crc kubenswrapper[4751]: I0130 21:21:09.423814 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:21:10 crc kubenswrapper[4751]: I0130 21:21:10.046275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd2xs"
Jan 30 21:21:16 crc kubenswrapper[4751]: I0130 21:21:16.591869 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:21:16 crc kubenswrapper[4751]: I0130 21:21:16.649706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:21:19 crc kubenswrapper[4751]: I0130 21:21:19.915156 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" containerID="cri-o://28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" gracePeriod=30
Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.399753 4751 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9lsr5 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.16:5000/healthz\": dial tcp 10.217.0.16:5000: connect: connection refused" start-of-body=
Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.399808 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.16:5000/healthz\": dial tcp 10.217.0.16:5000: connect: connection refused"
Jan 30 21:21:20 crc kubenswrapper[4751]: I0130 21:21:20.987017 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.070913 4751 generic.go:334] "Generic (PLEG): container finished" podID="73d0a80a-e569-428a-b251-33f28e06fffd" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51" exitCode=0
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerDied","Data":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"}
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5" event={"ID":"73d0a80a-e569-428a-b251-33f28e06fffd","Type":"ContainerDied","Data":"af1fafb4fa1bc5d4e5549e32e14665bb190720767667d7915533461f80e83d20"}
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071300 4751 scope.go:117] "RemoveContainer" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.071618 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9lsr5"
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.091551 4751 scope.go:117] "RemoveContainer" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"
Jan 30 21:21:21 crc kubenswrapper[4751]: E0130 21:21:21.092098 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": container with ID starting with 28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51 not found: ID does not exist" containerID="28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.092140 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51"} err="failed to get container status \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": rpc error: code = NotFound desc = could not find container \"28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51\": container with ID starting with 28cd6c03baa20199626418ec8759b36e2e4744509c9dd86f5db386c640e77f51 not found: ID does not exist"
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175344 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175555 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175605 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.175626 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") pod \"73d0a80a-e569-428a-b251-33f28e06fffd\" (UID: \"73d0a80a-e569-428a-b251-33f28e06fffd\") "
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.176273 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.177589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.182128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.184221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d" (OuterVolumeSpecName: "kube-api-access-2cp5d") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "kube-api-access-2cp5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.184812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.185607 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.216214 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.238128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "73d0a80a-e569-428a-b251-33f28e06fffd" (UID: "73d0a80a-e569-428a-b251-33f28e06fffd"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277147 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277180 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73d0a80a-e569-428a-b251-33f28e06fffd-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277191 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277199 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cp5d\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-kube-api-access-2cp5d\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277208 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73d0a80a-e569-428a-b251-33f28e06fffd-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277215 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73d0a80a-e569-428a-b251-33f28e06fffd-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.277223 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73d0a80a-e569-428a-b251-33f28e06fffd-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.403175 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"]
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.413495 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9lsr5"]
Jan 30 21:21:21 crc kubenswrapper[4751]: I0130 21:21:21.983127 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" path="/var/lib/kubelet/pods/73d0a80a-e569-428a-b251-33f28e06fffd/volumes"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.619551 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"]
Jan 30 21:21:24 crc kubenswrapper[4751]: E0130 21:21:24.621782 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.622043 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.622536 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d0a80a-e569-428a-b251-33f28e06fffd" containerName="registry"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.623545 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.627227 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.629533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"]
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.629872 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630180 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630458 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.630842 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.823566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.925112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.925202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.925305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.927084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/b354db71-ccf5-4280-86a3-faf88514fb9d-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.933236 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/b354db71-ccf5-4280-86a3-faf88514fb9d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:24 crc kubenswrapper[4751]: I0130 21:21:24.956429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvm8\" (UniqueName: \"kubernetes.io/projected/b354db71-ccf5-4280-86a3-faf88514fb9d-kube-api-access-jxvm8\") pod \"cluster-monitoring-operator-6d5b84845-kfrjg\" (UID: \"b354db71-ccf5-4280-86a3-faf88514fb9d\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:25 crc kubenswrapper[4751]: I0130 21:21:25.256455 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"
Jan 30 21:21:25 crc kubenswrapper[4751]: I0130 21:21:25.721240 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg"]
Jan 30 21:21:25 crc kubenswrapper[4751]: W0130 21:21:25.728945 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb354db71_ccf5_4280_86a3_faf88514fb9d.slice/crio-45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635 WatchSource:0}: Error finding container 45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635: Status 404 returned error can't find the container with id 45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635
Jan 30 21:21:26 crc kubenswrapper[4751]: I0130 21:21:26.105917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" event={"ID":"b354db71-ccf5-4280-86a3-faf88514fb9d","Type":"ContainerStarted","Data":"45e4b1a29aca1e926794eb3c64f21a5c1ac916d5b573db998a665277bbe51635"}
Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.912339 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"]
Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.913621 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.916806 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.916919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-5ttch"
Jan 30 21:21:28 crc kubenswrapper[4751]: I0130 21:21:28.921286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"]
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.095304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.124441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" event={"ID":"b354db71-ccf5-4280-86a3-faf88514fb9d","Type":"ContainerStarted","Data":"34dfb55120c98144a517444ef52397b872626df176854d7005a33e6525ee98ad"}
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.196528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.212091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c9c572af-7f6f-4be7-b19e-7adaff281d9d-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2w6rn\" (UID: \"c9c572af-7f6f-4be7-b19e-7adaff281d9d\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.230130 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.687601 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kfrjg" podStartSLOduration=3.181566757 podStartE2EDuration="5.687579044s" podCreationTimestamp="2026-01-30 21:21:24 +0000 UTC" firstStartedPulling="2026-01-30 21:21:25.734056942 +0000 UTC m=+424.479879631" lastFinishedPulling="2026-01-30 21:21:28.240069269 +0000 UTC m=+426.985891918" observedRunningTime="2026-01-30 21:21:29.153206019 +0000 UTC m=+427.899028678" watchObservedRunningTime="2026-01-30 21:21:29.687579044 +0000 UTC m=+428.433401713"
Jan 30 21:21:29 crc kubenswrapper[4751]: I0130 21:21:29.689879 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"]
Jan 30 21:21:30 crc kubenswrapper[4751]: I0130 21:21:30.131363 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" event={"ID":"c9c572af-7f6f-4be7-b19e-7adaff281d9d","Type":"ContainerStarted","Data":"28c0fd8c034a9d8717f4e0c385c1f937434c304e48b8a698ad2927f5f5b754bc"}
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.146452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" event={"ID":"c9c572af-7f6f-4be7-b19e-7adaff281d9d","Type":"ContainerStarted","Data":"ae4d18660aee0924d88aac7436bd40570778571b35b57d7edc4a5d979543ceb9"}
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.146900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.154951 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.168523 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2w6rn" podStartSLOduration=2.623716995 podStartE2EDuration="4.168507675s" podCreationTimestamp="2026-01-30 21:21:28 +0000 UTC" firstStartedPulling="2026-01-30 21:21:29.703224056 +0000 UTC m=+428.449046715" lastFinishedPulling="2026-01-30 21:21:31.248014746 +0000 UTC m=+429.993837395" observedRunningTime="2026-01-30 21:21:32.166206633 +0000 UTC m=+430.912029322" watchObservedRunningTime="2026-01-30 21:21:32.168507675 +0000 UTC m=+430.914330324"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.979089 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"]
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.980654 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.983622 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-5qnnt"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.983672 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.984567 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 30 21:21:32 crc kubenswrapper[4751]: I0130 21:21:32.984600 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.002653 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"]
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049091 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.049616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156074 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.156122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: E0130 21:21:33.156262 4751 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Jan 30 21:21:33 crc kubenswrapper[4751]: E0130 21:21:33.156311 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls podName:68479b95-39ca-4900-af4e-ee0c7d98998c nodeName:}" failed. No retries permitted until 2026-01-30 21:21:33.656294738 +0000 UTC m=+432.402117387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls") pod "prometheus-operator-db54df47d-nfkcx" (UID: "68479b95-39ca-4900-af4e-ee0c7d98998c") : secret "prometheus-operator-tls" not found
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.157350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/68479b95-39ca-4900-af4e-ee0c7d98998c-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.175767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.181457 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b4qp\" (UniqueName: \"kubernetes.io/projected/68479b95-39ca-4900-af4e-ee0c7d98998c-kube-api-access-2b4qp\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.661065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.670420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/68479b95-39ca-4900-af4e-ee0c7d98998c-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nfkcx\" (UID: \"68479b95-39ca-4900-af4e-ee0c7d98998c\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:33 crc kubenswrapper[4751]: I0130 21:21:33.913792 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx"
Jan 30 21:21:34 crc kubenswrapper[4751]: I0130 21:21:34.379418 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nfkcx"]
Jan 30 21:21:34 crc kubenswrapper[4751]: W0130 21:21:34.389199 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68479b95_39ca_4900_af4e_ee0c7d98998c.slice/crio-6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028 WatchSource:0}: Error finding container 6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028: Status 404 returned error can't find the container with id 6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028
Jan 30 21:21:35 crc kubenswrapper[4751]: I0130 21:21:35.182157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"6a4e80040e5a9add0247d290704f0018c72e787090cea37c52a67e1f3a56c028"}
Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.189020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"576994ff244113730f86bf9e0877acb2855268db8dede46691fcb92d880c6d76"}
Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.189317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" event={"ID":"68479b95-39ca-4900-af4e-ee0c7d98998c","Type":"ContainerStarted","Data":"966198049e04166800964daf2657a5b3d91bac4c8f9e0762697b11d7c89f6531"}
Jan 30 21:21:36 crc kubenswrapper[4751]: I0130 21:21:36.216136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nfkcx" podStartSLOduration=2.907232624 podStartE2EDuration="4.216109266s" podCreationTimestamp="2026-01-30 21:21:32 +0000 UTC" firstStartedPulling="2026-01-30 21:21:34.392840878 +0000 UTC m=+433.138663537" lastFinishedPulling="2026-01-30 21:21:35.70171753 +0000 UTC m=+434.447540179" observedRunningTime="2026-01-30 21:21:36.211044509 +0000 UTC m=+434.956867198" watchObservedRunningTime="2026-01-30 21:21:36.216109266 +0000 UTC m=+434.961931955"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.358477 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"]
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.359801 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362066 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362353 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.362517 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-px4hv"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.363494 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.378892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"]
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.389400 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-59fm8"]
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.390820 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-59fm8"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.393202 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"]
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394443 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ch2nh"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394637 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.394663 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.397591 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403924 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-jxfsm"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403952 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.403925 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.409149 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"]
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440724 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440776 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8"
Jan 30 21:21:38 crc kubenswrapper[4751]: I0130
21:21:38.440871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440948 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.440994 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441035 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.441101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542253 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.542302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.543224 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b887fa7-4c67-4c26-86cb-e4d18c024c03-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.544987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545038 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545165 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: 
\"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545365 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/506a5b6f-daef-43b6-a780-a6c727c076fe-metrics-client-ca\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.545571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-sys\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.546767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d3306c1e-22bb-4266-8ede-1a4acb3e3152-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.548907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549591 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-textfile\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.549879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-wtmp\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.550114 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d3306c1e-22bb-4266-8ede-1a4acb3e3152-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.551417 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b887fa7-4c67-4c26-86cb-e4d18c024c03-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " 
pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/506a5b6f-daef-43b6-a780-a6c727c076fe-root\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.553566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/506a5b6f-daef-43b6-a780-a6c727c076fe-node-exporter-tls\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:38 crc kubenswrapper[4751]: I0130 21:21:38.561794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.261605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftwg5\" (UniqueName: \"kubernetes.io/projected/6b887fa7-4c67-4c26-86cb-e4d18c024c03-kube-api-access-ftwg5\") pod \"openshift-state-metrics-566fddb674-8qt5c\" (UID: \"6b887fa7-4c67-4c26-86cb-e4d18c024c03\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.263578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9qg8\" (UniqueName: \"kubernetes.io/projected/506a5b6f-daef-43b6-a780-a6c727c076fe-kube-api-access-k9qg8\") pod \"node-exporter-59fm8\" (UID: \"506a5b6f-daef-43b6-a780-a6c727c076fe\") " pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.278706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf24\" (UniqueName: \"kubernetes.io/projected/d3306c1e-22bb-4266-8ede-1a4acb3e3152-kube-api-access-lvf24\") pod \"kube-state-metrics-777cb5bd5d-m65xp\" (UID: \"d3306c1e-22bb-4266-8ede-1a4acb3e3152\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.317995 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-59fm8" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.355169 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" Jan 30 21:21:39 crc kubenswrapper[4751]: W0130 21:21:39.386424 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506a5b6f_daef_43b6_a780_a6c727c076fe.slice/crio-362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b WatchSource:0}: Error finding container 362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b: Status 404 returned error can't find the container with id 362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.574061 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.640751 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.642527 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645869 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.645999 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.646115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647678 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647747 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647787 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-g4pzr" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.647977 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.653107 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.667103 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762662 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762683 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 
21:21:39.762706 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762735 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762914 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.762942 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.763023 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.798548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c"] Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864339 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864659 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") 
" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864892 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.864934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868409 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.868692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08b15b-9f9e-4437-9222-25bb2f84216e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.871813 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-out\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872161 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-web-config\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 
21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.872842 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-config-volume\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.873075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.873085 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.874020 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.874130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6e08b15b-9f9e-4437-9222-25bb2f84216e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.882513 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpp54\" (UniqueName: \"kubernetes.io/projected/6e08b15b-9f9e-4437-9222-25bb2f84216e-kube-api-access-kpp54\") pod \"alertmanager-main-0\" (UID: \"6e08b15b-9f9e-4437-9222-25bb2f84216e\") " pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.958316 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 30 21:21:39 crc kubenswrapper[4751]: I0130 21:21:39.992475 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp"] Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.240767 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"748cb8efb35419cf9fccc358a1e0878120a9068199369cf09dc8737aae54117c"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243218 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"4d2a318baf3b452b71e9e20d5b75d98bd306f018832de1625220c87aaf9ff3f4"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243258 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"f579750255028ba6024bd98fe7032f40547cb01911e21e5c163dc8b3207dae4e"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.243268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"d1e15a310a4d8212a71260df00f416466c10c6717211dad84b3b0c4690e614bb"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.244358 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"362e282743df3ca3aeda5b20c0bdb9051743509b0a7179a1b4db5e9d8ca89b1b"} Jan 30 21:21:40 crc kubenswrapper[4751]: I0130 21:21:40.403050 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 30 21:21:40 crc kubenswrapper[4751]: W0130 21:21:40.570695 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e08b15b_9f9e_4437_9222_25bb2f84216e.slice/crio-ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d WatchSource:0}: Error finding container ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d: Status 404 returned error can't find the container with id ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.250868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"ca7de6e4c0a58160cce1757d31080c82689c0a4574cd7c713a50436982d4bc6d"} Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.252795 4751 generic.go:334] "Generic (PLEG): container finished" podID="506a5b6f-daef-43b6-a780-a6c727c076fe" containerID="6ac0be97ad672f4e7248d8ca23810c59cbf2b6b0cfe1c3383a76acf4abf73010" exitCode=0 Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.252848 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerDied","Data":"6ac0be97ad672f4e7248d8ca23810c59cbf2b6b0cfe1c3383a76acf4abf73010"} Jan 30 21:21:41 crc kubenswrapper[4751]: 
I0130 21:21:41.544398 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.547016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.548429 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.548797 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549088 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549514 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4s5l3hluq0o23" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.549874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.553575 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4hwb6" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.566980 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.587944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.587985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588002 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: 
\"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.588338 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690089 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod 
\"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690211 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690243 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.690264 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.692012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/66e2e180-25ad-48cf-90fa-cb472fc3f248-metrics-client-ca\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.699920 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.700762 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-grpc-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.701750 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " 
pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.705889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.707932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.708446 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/66e2e180-25ad-48cf-90fa-cb472fc3f248-secret-thanos-querier-tls\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.717771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlmd\" (UniqueName: \"kubernetes.io/projected/66e2e180-25ad-48cf-90fa-cb472fc3f248-kube-api-access-wqlmd\") pod \"thanos-querier-7cb485bf5d-zbqfn\" (UID: \"66e2e180-25ad-48cf-90fa-cb472fc3f248\") " pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:41 crc kubenswrapper[4751]: I0130 21:21:41.880579 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:42 crc kubenswrapper[4751]: I0130 21:21:42.738258 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn"] Jan 30 21:21:42 crc kubenswrapper[4751]: W0130 21:21:42.746631 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e2e180_25ad_48cf_90fa_cb472fc3f248.slice/crio-6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186 WatchSource:0}: Error finding container 6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186: Status 404 returned error can't find the container with id 6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186 Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.188925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.189771 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.199314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.267381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"76629e07c62a68371af62ff81941845c90f628dd02ee3184c02d4b8cb2ff0b1b"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.267432 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-59fm8" event={"ID":"506a5b6f-daef-43b6-a780-a6c727c076fe","Type":"ContainerStarted","Data":"aa4225b5d34351673317cd1ff7134de28cfcc3f9bc81a97ec40244955f13b417"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"f1c1e070c93ae770acf66c9f6da146f75a04f8c8776810b568aa8979945a5785"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"39a44bd5757e3088990fdf8fbc71ca13e4fefc329fa942a20c5e39689f78b960"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.269993 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" event={"ID":"d3306c1e-22bb-4266-8ede-1a4acb3e3152","Type":"ContainerStarted","Data":"65a0710b2c734b91de3bd04ff1bb61972ea696a8f11ef2f25d3b56b4d5b75e9d"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.271420 4751 generic.go:334] "Generic (PLEG): container finished" podID="6e08b15b-9f9e-4437-9222-25bb2f84216e" containerID="72074fff18d8b31892f38fedacd663120f7dc5e1c6d79135055a34214329a2fc" exitCode=0 Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.271470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerDied","Data":"72074fff18d8b31892f38fedacd663120f7dc5e1c6d79135055a34214329a2fc"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.273606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" event={"ID":"6b887fa7-4c67-4c26-86cb-e4d18c024c03","Type":"ContainerStarted","Data":"92f219ed510797cff85970e4cff400d2e89ec5f6cd001553e062bb517a6cc2b5"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.274557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"6909ac05d8e920bea233140d5eb05ffe48b9a1d8d31299e484cd62a85af8c186"} Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.292372 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-59fm8" podStartSLOduration=4.076673917 podStartE2EDuration="5.292352374s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:39.393184619 +0000 UTC m=+438.139007268" lastFinishedPulling="2026-01-30 21:21:40.608863076 +0000 UTC 
m=+439.354685725" observedRunningTime="2026-01-30 21:21:43.288279424 +0000 UTC m=+442.034102093" watchObservedRunningTime="2026-01-30 21:21:43.292352374 +0000 UTC m=+442.038175033" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.308989 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309035 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.309347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.339427 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8qt5c" podStartSLOduration=3.113118848 podStartE2EDuration="5.339409614s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:40.073868414 +0000 UTC m=+438.819691063" lastFinishedPulling="2026-01-30 21:21:42.30015918 +0000 UTC 
m=+441.045981829" observedRunningTime="2026-01-30 21:21:43.337610696 +0000 UTC m=+442.083433335" watchObservedRunningTime="2026-01-30 21:21:43.339409614 +0000 UTC m=+442.085232263" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.359003 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m65xp" podStartSLOduration=3.047977109 podStartE2EDuration="5.358988293s" podCreationTimestamp="2026-01-30 21:21:38 +0000 UTC" firstStartedPulling="2026-01-30 21:21:39.988349205 +0000 UTC m=+438.734171864" lastFinishedPulling="2026-01-30 21:21:42.299360389 +0000 UTC m=+441.045183048" observedRunningTime="2026-01-30 21:21:43.356130095 +0000 UTC m=+442.101952744" watchObservedRunningTime="2026-01-30 21:21:43.358988293 +0000 UTC m=+442.104810942" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410599 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410638 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410705 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410798 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.410819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " 
pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.411909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.412448 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.412476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.413138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.416130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.416498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.428081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"console-5d76f88947-6xcwf\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.506943 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:43 crc kubenswrapper[4751]: I0130 21:21:43.906544 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:21:43 crc kubenswrapper[4751]: W0130 21:21:43.915677 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1ab634_ceee_441a_8c73_eee8464c68f6.slice/crio-90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f WatchSource:0}: Error finding container 90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f: Status 404 returned error can't find the container with id 90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.146058 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.146813 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.148752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.149578 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.150051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.222182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.280361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerStarted","Data":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.280406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerStarted","Data":"90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f"} Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.297600 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d76f88947-6xcwf" podStartSLOduration=1.297584189 podStartE2EDuration="1.297584189s" podCreationTimestamp="2026-01-30 21:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:21:44.297039415 +0000 UTC m=+443.042862064" watchObservedRunningTime="2026-01-30 21:21:44.297584189 +0000 UTC m=+443.043406838" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.323215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.330935 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/591ae6a6-bb7e-4805-a1bd-e45b7624468d-monitoring-plugin-cert\") pod \"monitoring-plugin-5d87cc6655-97t9z\" (UID: \"591ae6a6-bb7e-4805-a1bd-e45b7624468d\") " pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.405196 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.406034 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-6lnl8" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.408938 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409075 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409156 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-1phlphhk6hasm" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.409370 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.420923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.466806 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525644 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525723 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525800 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.525818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " 
pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635259 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635344 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635386 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.635424 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.636184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/f11068e1-9b5d-488b-bd21-9986af1e86f6-audit-log\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.636824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-metrics-server-audit-profiles\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.637353 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11068e1-9b5d-488b-bd21-9986af1e86f6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.638621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-client-certs\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.640563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-secret-metrics-server-tls\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.646566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11068e1-9b5d-488b-bd21-9986af1e86f6-client-ca-bundle\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.657794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxmwl\" (UniqueName: \"kubernetes.io/projected/f11068e1-9b5d-488b-bd21-9986af1e86f6-kube-api-access-nxmwl\") pod \"metrics-server-7d5557bc66-sc8vg\" (UID: \"f11068e1-9b5d-488b-bd21-9986af1e86f6\") " pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.723815 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.766902 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.768762 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.772194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.772604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-46tbn" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.776432 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.777751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.778417 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779124 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-1snvg1vdm8jdq" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779374 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.779804 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.780089 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.780759 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.781557 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.803637 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.813914 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.816914 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837815 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837942 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.837982 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838014 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838042 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838088 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838200 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.838316 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939971 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.939987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940139 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.940196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942179 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.942848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.944284 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.945206 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.946043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.946760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/795f9139-b7e5-4bb2-86f6-e2046f4190de-config-out\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.947247 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-kube-rbac-proxy\") pod 
\"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.948436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.948926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-web-config\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.951274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.953386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795f9139-b7e5-4bb2-86f6-e2046f4190de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.953959 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.955121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/795f9139-b7e5-4bb2-86f6-e2046f4190de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:44 crc kubenswrapper[4751]: I0130 21:21:44.957796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvk55\" (UniqueName: \"kubernetes.io/projected/795f9139-b7e5-4bb2-86f6-e2046f4190de-kube-api-access-wvk55\") pod \"prometheus-k8s-0\" (UID: \"795f9139-b7e5-4bb2-86f6-e2046f4190de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:45 crc kubenswrapper[4751]: I0130 21:21:45.088104 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.193539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.200188 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod795f9139_b7e5_4bb2_86f6_e2046f4190de.slice/crio-3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb WatchSource:0}: Error finding container 3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb: Status 404 returned error can't find the container with id 3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.243078 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7d5557bc66-sc8vg"] Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.250393 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11068e1_9b5d_488b_bd21_9986af1e86f6.slice/crio-0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a WatchSource:0}: Error finding container 0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a: Status 404 returned error can't find the container with id 0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"180bef4c38217333ae8586722a0fcdec017cb64127db32c846c26a5378a6d0f1"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"e441d7db8b5f38a6897a6ae27ef65077645c8bd49534b38cc007124485b5be2a"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"6fb1f42417530492a2a64c2c063c82343873633b79375e6efb2f79588873d11f"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.295819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"1cc3daad45de2425a6be02cd8d9b19e2199fc2a230be0c3913d05ad7c63d5e81"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.296197 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z"] Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.297786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" event={"ID":"f11068e1-9b5d-488b-bd21-9986af1e86f6","Type":"ContainerStarted","Data":"0ba3919dba51339e9c8a906da887181b7ab3180a18bdfa768d6e8f0ffe73c57a"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.299548 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"3a9feadcfdb7de473d7d65ef5606a62c2f5976b66d34d50e4e9cd02ee72182bb"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.301960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"196d44f4e2f8510aef54e50978e5d7722ea622790a479570378733710ad55723"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.301997 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"2c2c080f1f4f8f31b4ed026423b9b25e1a578af77fce4d43f23d7c3cc452fa75"} Jan 30 21:21:46 crc kubenswrapper[4751]: I0130 21:21:46.302007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"8eb2e570c1ccde5624b6cb4a6d7a8e89930358be9245da9f6645eeeb16734890"} Jan 30 21:21:46 crc kubenswrapper[4751]: W0130 21:21:46.303070 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591ae6a6_bb7e_4805_a1bd_e45b7624468d.slice/crio-8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f WatchSource:0}: Error finding container 8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f: Status 404 returned error can't find the container with id 8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.316385 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"4fbb3000b655d5b37336e698b6373c6d4773408006a1af25172bf719751eee1e"} Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.318061 4751 generic.go:334] "Generic (PLEG): container finished" podID="795f9139-b7e5-4bb2-86f6-e2046f4190de" containerID="8f796d49626603846702268c2aa0059b309f2abdbd6cee803d2aa2e5e920a30a" exitCode=0 Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.318173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerDied","Data":"8f796d49626603846702268c2aa0059b309f2abdbd6cee803d2aa2e5e920a30a"} Jan 30 21:21:47 crc kubenswrapper[4751]: I0130 21:21:47.319262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" event={"ID":"591ae6a6-bb7e-4805-a1bd-e45b7624468d","Type":"ContainerStarted","Data":"8ecd6f1b695761cda3f8accb34c0882b3cd5f31a2c1125dea1ef07238e7e814f"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.333486 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6e08b15b-9f9e-4437-9222-25bb2f84216e","Type":"ContainerStarted","Data":"ff5a651f0e5c8521da8e6ffd4d2c5a92bf955e3e3bbcbd6bd99d107df1cefdf0"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.336769 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"ed635854263a020d72ca07b93ee7f68107cacfbe9bfab787abcab4886f1996c3"} Jan 30 
21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.338159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" event={"ID":"591ae6a6-bb7e-4805-a1bd-e45b7624468d","Type":"ContainerStarted","Data":"6a36cc5818bfeabcf94b1f30154e13a928417b4f94e4c7ca837328dedd0b1e8c"} Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.338542 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.348649 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.365845 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.28656763 podStartE2EDuration="9.365823569s" podCreationTimestamp="2026-01-30 21:21:39 +0000 UTC" firstStartedPulling="2026-01-30 21:21:40.573584443 +0000 UTC m=+439.319407092" lastFinishedPulling="2026-01-30 21:21:47.652840382 +0000 UTC m=+446.398663031" observedRunningTime="2026-01-30 21:21:48.359663982 +0000 UTC m=+447.105486711" watchObservedRunningTime="2026-01-30 21:21:48.365823569 +0000 UTC m=+447.111646208" Jan 30 21:21:48 crc kubenswrapper[4751]: I0130 21:21:48.375207 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5d87cc6655-97t9z" podStartSLOduration=3.021759787 podStartE2EDuration="4.375175391s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:46.307009453 +0000 UTC m=+445.052832102" lastFinishedPulling="2026-01-30 21:21:47.660425057 +0000 UTC m=+446.406247706" observedRunningTime="2026-01-30 21:21:48.374234786 +0000 UTC m=+447.120057435" watchObservedRunningTime="2026-01-30 21:21:48.375175391 +0000 UTC m=+447.120998110" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.346503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"b128cc5fa29e6669cbe27455b4929c8b1acafc8353a651fd0202279a561de062"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.346979 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" event={"ID":"66e2e180-25ad-48cf-90fa-cb472fc3f248","Type":"ContainerStarted","Data":"be5dfbb37f3ca1ab2aedc56792a72ff2e4690790ced3f926fa12a4d595d40556"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.347012 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.349463 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" event={"ID":"f11068e1-9b5d-488b-bd21-9986af1e86f6","Type":"ContainerStarted","Data":"ee12ac60f913b32e50a983263db96c6c8025db7518e26f6a69a324b683eb4324"} Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.370042 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" podStartSLOduration=3.468668578 podStartE2EDuration="8.370023096s" podCreationTimestamp="2026-01-30 21:21:41 +0000 UTC" firstStartedPulling="2026-01-30 21:21:42.749377217 +0000 UTC m=+441.495199866" 
lastFinishedPulling="2026-01-30 21:21:47.650731735 +0000 UTC m=+446.396554384" observedRunningTime="2026-01-30 21:21:49.367871437 +0000 UTC m=+448.113694086" watchObservedRunningTime="2026-01-30 21:21:49.370023096 +0000 UTC m=+448.115845745" Jan 30 21:21:49 crc kubenswrapper[4751]: I0130 21:21:49.389843 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" podStartSLOduration=3.237525051 podStartE2EDuration="5.38982074s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:46.254707961 +0000 UTC m=+445.000530610" lastFinishedPulling="2026-01-30 21:21:48.40700361 +0000 UTC m=+447.152826299" observedRunningTime="2026-01-30 21:21:49.385983887 +0000 UTC m=+448.131806536" watchObservedRunningTime="2026-01-30 21:21:49.38982074 +0000 UTC m=+448.135643389" Jan 30 21:21:51 crc kubenswrapper[4751]: I0130 21:21:51.939876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7cb485bf5d-zbqfn" Jan 30 21:21:52 crc kubenswrapper[4751]: I0130 21:21:52.384007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"f44561733f676d255afcdff20f3e736384d858863e1bd25f17a3727702a1c2f7"} Jan 30 21:21:52 crc kubenswrapper[4751]: I0130 21:21:52.384406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"6f44f7c5500f180c1a3ef9cdfcb4189ced77c0866d2a57151290b053cf09e851"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"60e5abe4626a6289065c1fdf88f00f1b7f086d6ac6e4b7e1327999d42c0c46e4"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396709 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"dacfdf568d1f441938a1074af2232864d76b6ece275c85e0f38626a50b743af5"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396729 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"b053be7a7a81ddb7ec1f6470714c02702596726fa448e3f653466119710716dd"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.396747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"795f9139-b7e5-4bb2-86f6-e2046f4190de","Type":"ContainerStarted","Data":"619b5a55d051aca854bcffbea21dde6d50663e29ea3e824a57e94d697ea35c9e"} Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.463852 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.429709476 podStartE2EDuration="9.463829474s" podCreationTimestamp="2026-01-30 21:21:44 +0000 UTC" firstStartedPulling="2026-01-30 21:21:47.320050118 +0000 UTC m=+446.065872767" lastFinishedPulling="2026-01-30 21:21:51.354170076 +0000 UTC m=+450.099992765" observedRunningTime="2026-01-30 21:21:53.443558247 +0000 UTC m=+452.189380946" watchObservedRunningTime="2026-01-30 21:21:53.463829474 +0000 
UTC m=+452.209652133" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.507894 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.511080 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:53 crc kubenswrapper[4751]: I0130 21:21:53.516231 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:54 crc kubenswrapper[4751]: I0130 21:21:54.407139 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:21:54 crc kubenswrapper[4751]: I0130 21:21:54.489734 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:21:55 crc kubenswrapper[4751]: I0130 21:21:55.088584 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:04 crc kubenswrapper[4751]: I0130 21:22:04.725124 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:04 crc kubenswrapper[4751]: I0130 21:22:04.725992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:19 crc kubenswrapper[4751]: I0130 21:22:19.554227 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7bw65" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" containerID="cri-o://843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" gracePeriod=15 Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.091160 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bw65_07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/console/0.log" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.091943 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185520 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185767 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.185847 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") pod \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\" (UID: \"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df\") " Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.186754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca" (OuterVolumeSpecName: "service-ca") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.186926 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.187299 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config" (OuterVolumeSpecName: "console-config") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.187488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.198979 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.201033 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw" (OuterVolumeSpecName: "kube-api-access-rrwzw") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "kube-api-access-rrwzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.201093 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" (UID: "07ac6020-7a19-4fd7-9daa-a7db1e3cd5df"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287684 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287787 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287810 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287829 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287849 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287868 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.287887 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwzw\" (UniqueName: \"kubernetes.io/projected/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df-kube-api-access-rrwzw\") on node \"crc\" DevicePath \"\"" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625246 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bw65_07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/console/0.log" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625374 4751 generic.go:334] "Generic (PLEG): container finished" podID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" exitCode=2 Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerDied","Data":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"} Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625526 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7bw65" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625561 4751 scope.go:117] "RemoveContainer" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.625537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bw65" event={"ID":"07ac6020-7a19-4fd7-9daa-a7db1e3cd5df","Type":"ContainerDied","Data":"d63914e011c114b25558640a8b61cb4256ca45025b1be36724b2e0af5265302e"} Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.659957 4751 scope.go:117] "RemoveContainer" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: E0130 21:22:20.661646 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": container with ID starting with 843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68 not found: ID does not exist" containerID="843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.661747 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68"} err="failed to get container status \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": rpc error: code = NotFound desc = could not find container \"843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68\": container with ID starting with 843b3765407d83795c144aa8145fd74758bb93076619ea9f3b60eea53f7c6c68 not found: ID does not exist" Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.676647 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:22:20 crc kubenswrapper[4751]: I0130 21:22:20.681285 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7bw65"] Jan 30 21:22:21 crc kubenswrapper[4751]: I0130 21:22:21.991416 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" path="/var/lib/kubelet/pods/07ac6020-7a19-4fd7-9daa-a7db1e3cd5df/volumes" Jan 30 21:22:24 crc kubenswrapper[4751]: I0130 21:22:24.734461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:24 crc kubenswrapper[4751]: I0130 21:22:24.740868 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7d5557bc66-sc8vg" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.088819 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.152829 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:45 crc kubenswrapper[4751]: I0130 21:22:45.881591 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 30 21:22:54 crc kubenswrapper[4751]: I0130 21:22:54.127087 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:22:54 crc kubenswrapper[4751]: I0130 21:22:54.128007 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.957541 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:04 crc kubenswrapper[4751]: E0130 21:23:04.959308 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.959429 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.959615 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ac6020-7a19-4fd7-9daa-a7db1e3cd5df" containerName="console" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.960159 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:04 crc kubenswrapper[4751]: I0130 21:23:04.971205 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.026945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: 
\"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.027350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.128994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129053 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.129876 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.130067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.130664 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.146048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.146064 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.159540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"console-66d88878c9-plgvh\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.290933 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.593944 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.989896 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerStarted","Data":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} Jan 30 21:23:05 crc kubenswrapper[4751]: I0130 21:23:05.990266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerStarted","Data":"0e693c5eb441ca00dae4f66390c3ffc4f2ac93c82973ea2489bbd8ae4743393e"} Jan 30 21:23:06 crc kubenswrapper[4751]: I0130 21:23:06.027471 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66d88878c9-plgvh" podStartSLOduration=2.02743935 podStartE2EDuration="2.02743935s" podCreationTimestamp="2026-01-30 21:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:23:06.019037074 +0000 UTC m=+524.764859763" watchObservedRunningTime="2026-01-30 21:23:06.02743935 +0000 UTC m=+524.773262039" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.291854 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.292413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:15 crc kubenswrapper[4751]: I0130 21:23:15.300026 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:16 crc kubenswrapper[4751]: I0130 21:23:16.070137 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:23:16 crc kubenswrapper[4751]: I0130 21:23:16.143036 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:22 crc kubenswrapper[4751]: I0130 21:23:22.247473 4751 scope.go:117] "RemoveContainer" containerID="660c0699f36cdfbc8888077f14b9b8efed6cc41a8b3dc7ca02dfbf3a83512f36" Jan 30 21:23:24 crc kubenswrapper[4751]: I0130 21:23:24.127188 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:23:24 crc kubenswrapper[4751]: I0130 21:23:24.127648 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.191900 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d76f88947-6xcwf" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console" 
containerID="cri-o://952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" gracePeriod=15 Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.626048 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d76f88947-6xcwf_ac1ab634-ceee-441a-8c73-eee8464c68f6/console/0.log" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.626397 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773002 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773186 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773355 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.773417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") pod \"ac1ab634-ceee-441a-8c73-eee8464c68f6\" (UID: \"ac1ab634-ceee-441a-8c73-eee8464c68f6\") " Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774664 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config" (OuterVolumeSpecName: "console-config") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.774872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.780142 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.780452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks" (OuterVolumeSpecName: "kube-api-access-gm6ks") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "kube-api-access-gm6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.787347 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac1ab634-ceee-441a-8c73-eee8464c68f6" (UID: "ac1ab634-ceee-441a-8c73-eee8464c68f6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875411 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875935 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875968 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.875990 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6ks\" (UniqueName: \"kubernetes.io/projected/ac1ab634-ceee-441a-8c73-eee8464c68f6-kube-api-access-gm6ks\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876011 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876029 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:41 crc kubenswrapper[4751]: I0130 21:23:41.876047 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac1ab634-ceee-441a-8c73-eee8464c68f6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.293519 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d76f88947-6xcwf_ac1ab634-ceee-441a-8c73-eee8464c68f6/console/0.log" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.293977 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" exitCode=2 Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerDied","Data":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d76f88947-6xcwf" event={"ID":"ac1ab634-ceee-441a-8c73-eee8464c68f6","Type":"ContainerDied","Data":"90dda4e7bb2586d629d12de8b6b6da54f391bd7370318df6020c6ebd1a54b36f"} Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294081 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d76f88947-6xcwf" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.294109 4751 scope.go:117] "RemoveContainer" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.324523 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.326352 4751 scope.go:117] "RemoveContainer" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: E0130 21:23:42.327059 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": container with ID starting with 952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d not found: ID does not exist" containerID="952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.327114 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d"} err="failed to get container status \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": rpc error: code = NotFound desc = could not find container \"952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d\": container with ID starting with 952f96e77f9dd45e8024b79a67aea877348b424583582d68c2d398258ba4346d not found: ID does not exist" Jan 30 21:23:42 crc kubenswrapper[4751]: I0130 21:23:42.332888 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d76f88947-6xcwf"] Jan 30 21:23:43 crc kubenswrapper[4751]: I0130 21:23:43.992917 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" path="/var/lib/kubelet/pods/ac1ab634-ceee-441a-8c73-eee8464c68f6/volumes" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.127116 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.128533 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.128625 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.129470 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.129601 4751 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" gracePeriod=600
Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402140 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" exitCode=0
Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6"}
Jan 30 21:23:54 crc kubenswrapper[4751]: I0130 21:23:54.402599 4751 scope.go:117] "RemoveContainer" containerID="daa3657d48b883db14b4975f24f93c0b2c6f7eb8738d3c0267f1f4f003ba63aa"
Jan 30 21:23:55 crc kubenswrapper[4751]: I0130 21:23:55.411168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"}
Jan 30 21:24:22 crc kubenswrapper[4751]: I0130 21:24:22.296833 4751 scope.go:117] "RemoveContainer" containerID="e0c09ac548892d16ce214d98d72512bd48e15448460ce8ae35e4043474ce58cc"
Jan 30 21:24:22 crc kubenswrapper[4751]: I0130 21:24:22.329076 4751 scope.go:117] "RemoveContainer" containerID="10d79502f57ca29d080e9753142598555bcee310b2933e4570a1f0619498f923"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.301886 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"]
Jan 30 21:25:50 crc kubenswrapper[4751]: E0130 21:25:50.302877 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.302898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.303127 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1ab634-ceee-441a-8c73-eee8464c68f6" containerName="console"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.306132 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.325526 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.344290 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"]
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418197 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.418865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.520681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.542783 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:50 crc kubenswrapper[4751]: I0130 21:25:50.641759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.112914 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"]
Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.338887 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerStarted","Data":"c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74"}
Jan 30 21:25:51 crc kubenswrapper[4751]: I0130 21:25:51.339243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerStarted","Data":"0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d"}
Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.345437 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74" exitCode=0
Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.346369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74"}
Jan 30 21:25:52 crc kubenswrapper[4751]: I0130 21:25:52.349082 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.126835 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.127520 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.362632 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc" exitCode=0
Jan 30 21:25:54 crc kubenswrapper[4751]: I0130 21:25:54.362704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc"}
Jan 30 21:25:55 crc kubenswrapper[4751]: I0130 21:25:55.373736 4751 generic.go:334] "Generic (PLEG): container finished" podID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerID="8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1" exitCode=0
Jan 30 21:25:55 crc kubenswrapper[4751]: I0130 21:25:55.373788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1"}
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.644399 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.719232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") "
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.719437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") "
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.720643 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") pod \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\" (UID: \"dedbc66c-13e3-4312-85e6-00d215e5f2ff\") "
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.724424 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle" (OuterVolumeSpecName: "bundle") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.726496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2" (OuterVolumeSpecName: "kube-api-access-bf4w2") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "kube-api-access-bf4w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.737266 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util" (OuterVolumeSpecName: "util") pod "dedbc66c-13e3-4312-85e6-00d215e5f2ff" (UID: "dedbc66c-13e3-4312-85e6-00d215e5f2ff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823256 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823320 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dedbc66c-13e3-4312-85e6-00d215e5f2ff-util\") on node \"crc\" DevicePath \"\""
Jan 30 21:25:56 crc kubenswrapper[4751]: I0130 21:25:56.823381 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4w2\" (UniqueName: \"kubernetes.io/projected/dedbc66c-13e3-4312-85e6-00d215e5f2ff-kube-api-access-bf4w2\") on node \"crc\" DevicePath \"\""
Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389387 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2" event={"ID":"dedbc66c-13e3-4312-85e6-00d215e5f2ff","Type":"ContainerDied","Data":"0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d"}
Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389445 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c06341dc64915b9ede2454cdb277fad81ddae6f920583071ae5766b060ac55d"
Jan 30 21:25:57 crc kubenswrapper[4751]: I0130 21:25:57.389476 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.639275 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"]
Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640116 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract"
Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640146 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="util"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640153 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="util"
Jan 30 21:26:04 crc kubenswrapper[4751]: E0130 21:26:04.640164 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="pull"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640170 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="pull"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640280 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" containerName="extract"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.640811 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4wgck"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643340 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.643386 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.653142 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.743313 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.744037 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.746141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-wrgzj"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.747005 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.752015 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.752894 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.793234 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.820898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832604 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.832688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.933890 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.934281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.940553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.941759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.946526 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.947369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950383 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950398 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9n6n6"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.950543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16999302-ac18-4e1c-b3f7-a2bf3f7605aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-vpng2\" (UID: \"16999302-ac18-4e1c-b3f7-a2bf3f7605aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.951009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0edc270-3913-41f7-9218-32549d1d3dea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-858b879-n4cw4\" (UID: \"c0edc270-3913-41f7-9218-32549d1d3dea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.963401 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:04 crc kubenswrapper[4751]: I0130 21:26:04.964889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7dz\" (UniqueName: \"kubernetes.io/projected/96f3e554-fbfc-4716-b6ee-0913394521fa-kube-api-access-8b7dz\") pod \"obo-prometheus-operator-68bc856cb9-5nv4n\" (UID: \"96f3e554-fbfc-4716-b6ee-0913394521fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.035104 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.035619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.058242 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.059183 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.059742 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.062785 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-d86tn"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.079314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.108193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.136976 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.137067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.145129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ee6b659-c8c9-4f07-a897-c69db812f880-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.157157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gs2v\" (UniqueName: \"kubernetes.io/projected/3ee6b659-c8c9-4f07-a897-c69db812f880-kube-api-access-2gs2v\") pod \"observability-operator-59bdc8b94-lhkl2\" (UID: \"3ee6b659-c8c9-4f07-a897-c69db812f880\") " pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.238161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.243097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.260396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.305817 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.344633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.344706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.345877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7472790e-3a0e-40dd-909c-4301ba84d884-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.369174 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mbs\" (UniqueName: \"kubernetes.io/projected/7472790e-3a0e-40dd-909c-4301ba84d884-kube-api-access-p7mbs\") pod \"perses-operator-5bf474d74f-l498d\" (UID: \"7472790e-3a0e-40dd-909c-4301ba84d884\") " pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.376629 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.507862 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n"]
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.530863 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f3e554_fbfc_4716_b6ee_0913394521fa.slice/crio-949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944 WatchSource:0}: Error finding container 949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944: Status 404 returned error can't find the container with id 949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.536662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.644165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2"]
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.656214 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lhkl2"]
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.666843 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16999302_ac18_4e1c_b3f7_a2bf3f7605aa.slice/crio-83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f WatchSource:0}: Error finding container 83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f: Status 404 returned error can't find the container with id 83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f
Jan 30 21:26:05 crc kubenswrapper[4751]: W0130 21:26:05.669354 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ee6b659_c8c9_4f07_a897_c69db812f880.slice/crio-4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd WatchSource:0}: Error finding container 4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd: Status 404 returned error can't find the container with id 4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd
Jan 30 21:26:05 crc kubenswrapper[4751]: I0130 21:26:05.677144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l498d"]
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.468581 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" event={"ID":"96f3e554-fbfc-4716-b6ee-0913394521fa","Type":"ContainerStarted","Data":"949d902b6d67c4f5a61b1a068286961454aaf38e30921e303fac5263292ec944"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.470138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" event={"ID":"16999302-ac18-4e1c-b3f7-a2bf3f7605aa","Type":"ContainerStarted","Data":"83e3f2296f955a9be2b1a67bdbd3576ffca2eba895af0ebcd86ec1f6c1089c9f"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.475308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" event={"ID":"c0edc270-3913-41f7-9218-32549d1d3dea","Type":"ContainerStarted","Data":"23c8ac9fcac0d71da8c3a971fe96facf412acf786d010f30524c5287167be801"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.476870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l498d" event={"ID":"7472790e-3a0e-40dd-909c-4301ba84d884","Type":"ContainerStarted","Data":"b74f88f83d8c31b942ad99651a96cb612084b98ee59a4f260755b1bdf3ec022e"}
Jan 30 21:26:06 crc kubenswrapper[4751]: I0130 21:26:06.479308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" event={"ID":"3ee6b659-c8c9-4f07-a897-c69db812f880","Type":"ContainerStarted","Data":"4496ba71d84aa36823bb2c8f1b3c87033040f0b962ca6ecb0d5c9d87ad7d0ddd"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.578885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" event={"ID":"3ee6b659-c8c9-4f07-a897-c69db812f880","Type":"ContainerStarted","Data":"9dff2d8751b0c8cac946f8d6e1f8f36d0b1fc633e1b11b9736ef09f658d4ab62"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.580903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.581017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" event={"ID":"96f3e554-fbfc-4716-b6ee-0913394521fa","Type":"ContainerStarted","Data":"0e29c1f50f84f784c4cd4752cb0c71c1cfc9994fe9c44094aefd37bd472a20d5"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.583660 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.588990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" event={"ID":"16999302-ac18-4e1c-b3f7-a2bf3f7605aa","Type":"ContainerStarted","Data":"ce9bf48055f9b949d19c3c2307f0647e5f4b63ed4152fdccb2220c35d3f63b84"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.592797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" event={"ID":"c0edc270-3913-41f7-9218-32549d1d3dea","Type":"ContainerStarted","Data":"a6efbbe954785342a4095a4b1c0533f1ca1cfb4f82c1a19d7986a93d67697626"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.594754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l498d" event={"ID":"7472790e-3a0e-40dd-909c-4301ba84d884","Type":"ContainerStarted","Data":"269c5af3098a454b4b3ce5dc2887ae4029e396e561c29fce7c140be2274b7fdf"}
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.594934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-l498d"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.602411 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-lhkl2" podStartSLOduration=2.500794385 podStartE2EDuration="12.602310916s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.673306372 +0000 UTC m=+704.419129021" lastFinishedPulling="2026-01-30 21:26:15.774822893 +0000 UTC m=+714.520645552" observedRunningTime="2026-01-30 21:26:16.598749716 +0000 UTC m=+715.344572375" watchObservedRunningTime="2026-01-30 21:26:16.602310916 +0000 UTC m=+715.348133575"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.620292 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-vpng2" podStartSLOduration=2.573603574 podStartE2EDuration="12.620273235s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.67398971 +0000 UTC m=+704.419812359" lastFinishedPulling="2026-01-30 21:26:15.720659371 +0000 UTC m=+714.466482020" observedRunningTime="2026-01-30 21:26:16.617584227 +0000 UTC m=+715.363406916" watchObservedRunningTime="2026-01-30 21:26:16.620273235 +0000 UTC m=+715.366095884"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.652375 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5nv4n" podStartSLOduration=2.44893171 podStartE2EDuration="12.652349814s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.535941475 +0000 UTC m=+704.281764124" lastFinishedPulling="2026-01-30 21:26:15.739359559 +0000 UTC m=+714.485182228" observedRunningTime="2026-01-30 21:26:16.643468028 +0000 UTC m=+715.389290707" watchObservedRunningTime="2026-01-30 21:26:16.652349814 +0000 UTC m=+715.398172503"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.664475 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-l498d" podStartSLOduration=1.589495919 podStartE2EDuration="11.664454393s" podCreationTimestamp="2026-01-30 21:26:05 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.690030907 +0000 UTC m=+704.435853556" lastFinishedPulling="2026-01-30 21:26:15.764989371 +0000 UTC m=+714.510812030" observedRunningTime="2026-01-30 21:26:16.659371623 +0000 UTC m=+715.405194272" watchObservedRunningTime="2026-01-30 21:26:16.664454393 +0000 UTC m=+715.410277062"
Jan 30 21:26:16 crc kubenswrapper[4751]: I0130 21:26:16.681553 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-858b879-n4cw4" podStartSLOduration=2.508800931 podStartE2EDuration="12.681527518s" podCreationTimestamp="2026-01-30 21:26:04 +0000 UTC" firstStartedPulling="2026-01-30 21:26:05.551046497 +0000 UTC m=+704.296869146" lastFinishedPulling="2026-01-30 21:26:15.723773094 +0000 UTC m=+714.469595733" observedRunningTime="2026-01-30 21:26:16.679390745 +0000 UTC m=+715.425213404" watchObservedRunningTime="2026-01-30 21:26:16.681527518 +0000 UTC m=+715.427350197"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.313928 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"]
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314816 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller" containerID="cri-o://cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314860 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb" containerID="cri-o://fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314924 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging" containerID="cri-o://e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314919 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314955 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node" containerID="cri-o://661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.314967 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd" containerID="cri-o://5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.315092 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb" containerID="cri-o://a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.351950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller" containerID="cri-o://54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" gracePeriod=30
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.632539 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633514 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/1.log"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633557 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" exitCode=2
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerDied","Data":"83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.633662 4751 scope.go:117] "RemoveContainer" containerID="2c6ea3db26de86b678d2306adc7f90c1d03797d9dd14847d766d709276053d02"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.634108 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344"
Jan 30 21:26:21 crc kubenswrapper[4751]: E0130 21:26:21.634511 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.635817 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovnkube-controller/3.log"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.637513 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638005 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log"
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638394 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" exitCode=0
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638419 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" exitCode=0
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638431 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" exitCode=0
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638440 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" exitCode=143
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638450 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" exitCode=143
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638497 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.638535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"}
Jan 30 21:26:21 crc kubenswrapper[4751]: I0130 21:26:21.656753 4751 scope.go:117] "RemoveContainer" containerID="959e2d34bf4d2470d1737891bbe3d8704e887259d95ea026e0467f531587bd29"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.537888 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.538369 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.538877 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.588973 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vnh74"]
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589220 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589235 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589246 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589268 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589274 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589281 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589287 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589304 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589313 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589335 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589349 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589356 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589363 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589368 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589376 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589381 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589388 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589395 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589407 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kubecfg-setup"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589413 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kubecfg-setup"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589420 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589426 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589520 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589534 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-acl-logging"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589540 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589546 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="nbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589554 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="northd"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589562 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="sbdb"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589571 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589578 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589585 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovn-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589593 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589603 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="kube-rbac-proxy-node"
Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.589712 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589719 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.589819 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerName="ovnkube-controller"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.591607 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.643904 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.646558 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-acl-logging/0.log"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.646972 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8bjd_3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/ovn-controller/0.log"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647255 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" exitCode=0
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647281 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" exitCode=0
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647291 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" exitCode=0
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647310 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"}
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"}
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647354 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"}
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd" event={"ID":"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb","Type":"ContainerDied","Data":"58211f7a0c83df8b70ab0c94abdcc3c0824047a0aa11f216a22dea02a287a2b0"}
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647376 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.647503 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8bjd"
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.656997 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657077 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657131 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657185 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657241 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657257 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657273 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657292 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657307 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") "
Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName:
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") pod \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\" (UID: \"3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb\") " Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657582 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657656 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657672 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657687 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657702 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657747 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657804 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc 
kubenswrapper[4751]: I0130 21:26:22.657866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.657906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658069 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket" (OuterVolumeSpecName: "log-socket") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658093 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658111 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658142 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log" (OuterVolumeSpecName: "node-log") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658707 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash" (OuterVolumeSpecName: "host-slash") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658712 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.658769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659147 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659164 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.659300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.662979 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497" (OuterVolumeSpecName: "kube-api-access-s8497") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "kube-api-access-s8497". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.663031 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.664774 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.671470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" (UID: "3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.714887 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.729824 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.746249 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759456 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod 
\"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: 
\"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759743 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759799 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759809 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759818 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759828 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759837 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759846 4751 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759854 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8497\" (UniqueName: \"kubernetes.io/projected/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-kube-api-access-s8497\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759863 4751 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759871 4751 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759881 4751 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759888 4751 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759897 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759904 4751 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759912 4751 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759920 4751 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759927 4751 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759935 4751 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759943 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759951 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759961 4751 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.759997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-ovn\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760334 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-script-lib\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760894 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-netd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-cni-bin\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760960 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-etc-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-slash\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.760997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-node-log\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.761923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovnkube-config\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.762630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-env-overrides\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.762925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-run-netns\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-systemd-units\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763087 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-var-lib-openvswitch\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-host-kubelet\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-run-systemd\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.763345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-log-socket\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.772260 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.780489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-ovn-node-metrics-cert\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.787753 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2qbt4\" (UniqueName: \"kubernetes.io/projected/e06def3c-fba4-4d8b-a757-8a9691b6e8d5-kube-api-access-2qbt4\") pod \"ovnkube-node-vnh74\" (UID: \"e06def3c-fba4-4d8b-a757-8a9691b6e8d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.794238 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.815664 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.831425 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.844553 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845045 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845075 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845099 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845576 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845606 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845628 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.845878 4751 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845921 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.845949 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846216 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846240 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846256 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846516 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846543 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846557 4751 scope.go:117] "RemoveContainer" 
containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.846762 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846783 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.846798 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847029 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847053 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847066 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847264 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847288 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with 
cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847301 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: E0130 21:26:22.847714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847739 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847752 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847955 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.847977 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848202 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848221 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848445 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with 
fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848466 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848688 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848710 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848923 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.848942 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849139 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849165 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849397 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849416 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849630 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849655 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849847 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.849866 4751 scope.go:117] "RemoveContainer" containerID="54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850064 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982"} err="failed to get container status \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": rpc error: code = NotFound desc = could not find container \"54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982\": container with ID starting with 54f2d2a75894256b1c5caca5dba126e32962397dad79b0238a59256c96c4b982 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850085 4751 scope.go:117] "RemoveContainer" containerID="a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850283 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4"} err="failed to get container status \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": rpc error: code = NotFound desc = could not find container \"a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4\": container with ID starting with a9a2172eeab362245094bf5e28b083271e88b53fa8c3ba0d9b80c1d6be47e5d4 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850302 4751 scope.go:117] "RemoveContainer" containerID="fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850579 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568"} err="failed to get container status \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": rpc error: code = NotFound desc = could not find container \"fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568\": container with ID starting with fea531fa2960ba230a577bc1d5399c4536b20a4da5918cf9d0836188b9917568 not found: ID does not exist" Jan 
30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850601 4751 scope.go:117] "RemoveContainer" containerID="5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850807 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753"} err="failed to get container status \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": rpc error: code = NotFound desc = could not find container \"5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753\": container with ID starting with 5875a4d184dfa8f9bf9aa3b5344801e98c40b94a2937f01a56a6374c9230b753 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.850840 4751 scope.go:117] "RemoveContainer" containerID="29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851248 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37"} err="failed to get container status \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": rpc error: code = NotFound desc = could not find container \"29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37\": container with ID starting with 29cec39dc8ad3c23c53aec8356ea6c6a742f88543468a686aaf97b65a3cd9a37 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851267 4751 scope.go:117] "RemoveContainer" containerID="661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851559 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2"} err="failed to get container status \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": rpc error: code = NotFound desc = could not find container \"661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2\": container with ID starting with 661d8aa2f65d22ce40c605a691f4953d672ec97fb9ac244ccaa78eebd80754e2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851579 4751 scope.go:117] "RemoveContainer" containerID="e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851833 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06"} err="failed to get container status \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": rpc error: code = NotFound desc = could not find container \"e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06\": container with ID starting with e478d9048ebb1566b1b7725c450233c878bcf04b735fcd40527011b7d9a05c06 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.851863 4751 scope.go:117] "RemoveContainer" containerID="cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852117 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2"} err="failed to get container status 
\"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": rpc error: code = NotFound desc = could not find container \"cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2\": container with ID starting with cf30eae0afb02b0dca16d662993e56760511ec8718652ac2b05771c4669398c2 not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852147 4751 scope.go:117] "RemoveContainer" containerID="4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.852391 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea"} err="failed to get container status \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": rpc error: code = NotFound desc = could not find container \"4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea\": container with ID starting with 4a324fdc8866aba0b1de9793faf9064f44d333b3b3da6bcd31f78aeef0c405ea not found: ID does not exist" Jan 30 21:26:22 crc kubenswrapper[4751]: I0130 21:26:22.905702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:22 crc kubenswrapper[4751]: W0130 21:26:22.922150 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode06def3c_fba4_4d8b_a757_8a9691b6e8d5.slice/crio-800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a WatchSource:0}: Error finding container 800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a: Status 404 returned error can't find the container with id 800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.004502 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.008475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8bjd"] Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653813 4751 generic.go:334] "Generic (PLEG): container finished" podID="e06def3c-fba4-4d8b-a757-8a9691b6e8d5" containerID="fbf27c0c77816fe260af9371a94b5decc79e2dcd9a7ea97c73a5e1b08c2aa7a7" exitCode=0 Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653876 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerDied","Data":"fbf27c0c77816fe260af9371a94b5decc79e2dcd9a7ea97c73a5e1b08c2aa7a7"} Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.653903 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"800fad32300ea017786587d7cde335216ff0418cedb424cb73e635531eec011a"} Jan 30 21:26:23 crc kubenswrapper[4751]: I0130 21:26:23.986176 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb" path="/var/lib/kubelet/pods/3b9eb477-4a6d-4f9c-ba41-5b79f5779ffb/volumes" Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.126930 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.127076 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.665384 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"c5bba681a0c5328060626237e12fc4a149d5fc336b368a86d5cd968ff56ea43b"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"6c9be4a6ce0c4681a390ddaaa2607239153351db36333368785f77d80128b789"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"695f321e07d13447854cd2795e0d52cc08a8014783da88bbb24e5408fb2cb5c9"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"814ce438a366d7261ae72260069f92dd6353fbcb1a6fa883000560cdb532e0c4"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"9d280fefc584670ba10b9e2a91ee82c0f4ffd9b03a6fbd4c9b8563ef418df4ca"} Jan 30 21:26:24 crc kubenswrapper[4751]: I0130 21:26:24.666799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"ad63af0b5d286a7cf0519491f737673ef96d6554f2776f21f05f6b309d7892da"} Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.380055 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-l498d" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.786636 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.788236 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790189 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790433 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n5r22" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.790631 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.816047 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.817167 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.819864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cvfjx" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.824211 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.826403 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.830750 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hnk4n" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906564 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:25 crc kubenswrapper[4751]: I0130 21:26:25.906606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.007942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.008462 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.008709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.027051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhcm\" (UniqueName: \"kubernetes.io/projected/9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd-kube-api-access-7vhcm\") pod \"cert-manager-858654f9db-mbzjn\" (UID: \"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd\") " pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.029903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zl5\" (UniqueName: \"kubernetes.io/projected/9acdc588-bef3-4ce2-bf06-afea86273408-kube-api-access-j7zl5\") pod \"cert-manager-webhook-687f57d79b-shbmk\" (UID: \"9acdc588-bef3-4ce2-bf06-afea86273408\") " pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.040806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gl2f\" (UniqueName: \"kubernetes.io/projected/04bdab63-06c1-475f-8351-a2ccc4292f25-kube-api-access-8gl2f\") pod \"cert-manager-cainjector-cf98fcc89-9k9rg\" (UID: \"04bdab63-06c1-475f-8351-a2ccc4292f25\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.107776 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133052 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133237 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133280 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.133378 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(53b5db70e375ba90c1dbd19c750df31695be1a20dfe126c79312540f12b77ba1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.141387 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.149819 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.171947 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172110 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172236 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.172556 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(c49424396545e535821a7f23ae860b318875e546e4c95d615e56330e31cfd7e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199162 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199238 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199260 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:26 crc kubenswrapper[4751]: E0130 21:26:26.199306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(0edc5d01a8de309214f70280fc660dbeddf385496c4b3bc832bb4edd033bf940): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:26 crc kubenswrapper[4751]: I0130 21:26:26.680153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"1a38d38bed78d6417058a678a7fe778414ff9bd58065edd5c13c466806c12180"} Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" event={"ID":"e06def3c-fba4-4d8b-a757-8a9691b6e8d5","Type":"ContainerStarted","Data":"33a6a45d7f4c3c534bfe517ee30a85d20b4ad905296fd2b0319391ebcdbb97d6"} Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.699849 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.739476 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" podStartSLOduration=7.739459482 podStartE2EDuration="7.739459482s" podCreationTimestamp="2026-01-30 21:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:26:29.738426135 +0000 UTC m=+728.484248784" watchObservedRunningTime="2026-01-30 21:26:29.739459482 +0000 UTC m=+728.485282131" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.743839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:29 crc kubenswrapper[4751]: I0130 21:26:29.745504 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.133998 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.134111 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.134617 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.141891 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.179958 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180015 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180037 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.180079 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(e17f23062a2b0bad7a23e53b2cd47e77076be6d95d66d82ff339abbb98a8dd2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193251 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: I0130 21:26:30.193656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215676 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215733 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215757 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.215794 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b363fe4a778d9ccbb0a5182bdeaffa84dd8adbd7ab2df81e38dde50d5681a56a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227114 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227164 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227183 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:30 crc kubenswrapper[4751]: E0130 21:26:30.227216 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(01744bd11caded8a975cd25d2509156a2d1bbd000a563a6455a6d57dcf6cdb39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:35 crc kubenswrapper[4751]: I0130 21:26:35.977173 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" Jan 30 21:26:35 crc kubenswrapper[4751]: E0130 21:26:35.977894 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5sgk2_openshift-multus(bcecdc4b-6607-4e4e-a9b5-49b85c030d21)\"" pod="openshift-multus/multus-5sgk2" podUID="bcecdc4b-6607-4e4e-a9b5-49b85c030d21" Jan 30 21:26:42 crc kubenswrapper[4751]: I0130 21:26:42.975653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:42 crc kubenswrapper[4751]: I0130 21:26:42.977010 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025254 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025390 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025438 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:43 crc kubenswrapper[4751]: E0130 21:26:43.025526 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager(04bdab63-06c1-475f-8351-a2ccc4292f25)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-9k9rg_cert-manager_04bdab63-06c1-475f-8351-a2ccc4292f25_0(9b70e517ec2a2f9c93da474f981befc1b536efa81857bbc2951ad86d35642787): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podUID="04bdab63-06c1-475f-8351-a2ccc4292f25" Jan 30 21:26:44 crc kubenswrapper[4751]: I0130 21:26:44.976080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:44 crc kubenswrapper[4751]: I0130 21:26:44.977485 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031422 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031501 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031531 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:45 crc kubenswrapper[4751]: E0130 21:26:45.031596 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mbzjn_cert-manager(9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mbzjn_cert-manager_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd_0(b1e89963d3fb6138e941477ad0a264ecb1d59e35d629a712325c0c46c14b8089): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mbzjn" podUID="9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd" Jan 30 21:26:45 crc kubenswrapper[4751]: I0130 21:26:45.975191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:45 crc kubenswrapper[4751]: I0130 21:26:45.976922 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004156 4751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004241 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004269 4751 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:26:46 crc kubenswrapper[4751]: E0130 21:26:46.004352 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-shbmk_cert-manager(9acdc588-bef3-4ce2-bf06-afea86273408)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-shbmk_cert-manager_9acdc588-bef3-4ce2-bf06-afea86273408_0(ec1aa5dab71b39de1131e6bf70b88d1aad3b1f626549e465574d1890e6fda8cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podUID="9acdc588-bef3-4ce2-bf06-afea86273408" Jan 30 21:26:49 crc kubenswrapper[4751]: I0130 21:26:49.976666 4751 scope.go:117] "RemoveContainer" containerID="83b2f589d316b2b21ef50ee0174ac43309d977d8244dba740216ca2dd67db344" Jan 30 21:26:50 crc kubenswrapper[4751]: I0130 21:26:50.885811 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5sgk2_bcecdc4b-6607-4e4e-a9b5-49b85c030d21/kube-multus/2.log" Jan 30 21:26:50 crc kubenswrapper[4751]: I0130 21:26:50.886227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5sgk2" event={"ID":"bcecdc4b-6607-4e4e-a9b5-49b85c030d21","Type":"ContainerStarted","Data":"80504ec7cb514f9b12d6512d6b92672e51e1f2ac85e30724b02b38f23ef119fc"} Jan 30 21:26:52 crc kubenswrapper[4751]: I0130 21:26:52.945127 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vnh74" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127640 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127734 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.127800 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.128842 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.128950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266" gracePeriod=600 Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.921731 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266" exitCode=0 Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.921811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"} Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.922603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
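[editor's note] The sequence just above is the complete probe-driven restart path: the liveness probe against http://127.0.0.1:8798/health gets "connection refused", the kubelet marks the container unhealthy, and kills it with the pod's termination grace period before restarting it. A minimal Go sketch of the kind of localhost health endpoint that probe expects; this is a hypothetical handler, not the machine-config-daemon's actual health logic:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Serve the path and port the probe above dials. While this listener is
	// down, the probe sees "connection refused" and the kubelet eventually
	// kills and restarts the container.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
		fmt.Println("listen error:", err)
	}
}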
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"} Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.922649 4751 scope.go:117] "RemoveContainer" containerID="a1b797d24a7a7f0cfe28e0e7b1326aa242a6fa28ef5d30064b33f02362b2f1a6" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.974939 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:54 crc kubenswrapper[4751]: I0130 21:26:54.975464 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" Jan 30 21:26:55 crc kubenswrapper[4751]: I0130 21:26:55.186539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg"] Jan 30 21:26:55 crc kubenswrapper[4751]: I0130 21:26:55.938715 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" event={"ID":"04bdab63-06c1-475f-8351-a2ccc4292f25","Type":"ContainerStarted","Data":"5c7fa1cebd0b137f6751c7c86400d9c4f1245c22e347323173cd02eadad09b36"} Jan 30 21:26:57 crc kubenswrapper[4751]: I0130 21:26:57.975498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:57 crc kubenswrapper[4751]: I0130 21:26:57.976890 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbzjn" Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.530693 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbzjn"] Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.985629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" event={"ID":"04bdab63-06c1-475f-8351-a2ccc4292f25","Type":"ContainerStarted","Data":"6bed7bff594206b7e4db2756f55a1b17b2aaa993f1053a907853e796a24db6ab"} Jan 30 21:26:59 crc kubenswrapper[4751]: I0130 21:26:59.985691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbzjn" event={"ID":"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd","Type":"ContainerStarted","Data":"c2e9d2ca4e9b4bb1fc7f0e0016fa27b5c8f80c3b6842d7d4adf02f13ff4eaa58"} Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.004637 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9k9rg" podStartSLOduration=30.724687067 podStartE2EDuration="35.004616104s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:26:55.195842936 +0000 UTC m=+753.941665585" lastFinishedPulling="2026-01-30 21:26:59.475771973 +0000 UTC m=+758.221594622" observedRunningTime="2026-01-30 21:27:00.003078585 +0000 UTC m=+758.748901264" watchObservedRunningTime="2026-01-30 21:27:00.004616104 +0000 UTC m=+758.750438753" Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.975138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:00 crc kubenswrapper[4751]: I0130 21:27:00.976083 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:02 crc kubenswrapper[4751]: I0130 21:27:02.155795 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-shbmk"] Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.017928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbzjn" event={"ID":"9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd","Type":"ContainerStarted","Data":"c33e1277dcb8bda6ae26f572105920ba9cadb13a6b4ea30cd630c2dd58f0660e"} Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.021907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" event={"ID":"9acdc588-bef3-4ce2-bf06-afea86273408","Type":"ContainerStarted","Data":"9ab313adebe7e8122c4ff6b9b65ef4789b9ff7f0d4e13f0f915ab23f26a63b82"} Jan 30 21:27:04 crc kubenswrapper[4751]: I0130 21:27:04.047055 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mbzjn" podStartSLOduration=35.489541852 podStartE2EDuration="39.046998698s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:26:59.539808518 +0000 UTC m=+758.285631177" lastFinishedPulling="2026-01-30 21:27:03.097265374 +0000 UTC m=+761.843088023" observedRunningTime="2026-01-30 21:27:04.036464879 +0000 UTC m=+762.782287528" watchObservedRunningTime="2026-01-30 21:27:04.046998698 +0000 UTC m=+762.792821357" Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.036642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" event={"ID":"9acdc588-bef3-4ce2-bf06-afea86273408","Type":"ContainerStarted","Data":"d5025a00c70a46078b1ad4d454005b1328143ea48bdeaf40fa775eeaf2c0ab3e"} Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.069240 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" podStartSLOduration=38.541172156 podStartE2EDuration="40.069206053s" podCreationTimestamp="2026-01-30 21:26:25 +0000 UTC" firstStartedPulling="2026-01-30 21:27:03.081632485 +0000 UTC m=+761.827455164" lastFinishedPulling="2026-01-30 21:27:04.609666412 +0000 UTC m=+763.355489061" observedRunningTime="2026-01-30 21:27:05.057647499 +0000 UTC m=+763.803470188" watchObservedRunningTime="2026-01-30 21:27:05.069206053 +0000 UTC m=+763.815028742" Jan 30 21:27:05 crc kubenswrapper[4751]: I0130 21:27:05.606546 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 21:27:06 crc kubenswrapper[4751]: I0130 21:27:06.044300 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:11 crc kubenswrapper[4751]: I0130 21:27:11.154281 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-shbmk" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.301434 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.304467 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.307680 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.332178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.403731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.404098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.404216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.481062 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.482982 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.491813 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505719 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.505763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.506205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.506256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.548428 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607499 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.607988 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.681287 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.709951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.727970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:37 crc kubenswrapper[4751]: I0130 21:27:37.799978 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.097704 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.236837 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:27:38 crc kubenswrapper[4751]: W0130 21:27:38.237718 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbca202_59f0_4772_a82c_8c448cbc4c70.slice/crio-00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938 WatchSource:0}: Error finding container 00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938: Status 404 returned error can't find the container with id 00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938 Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.391841 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2" exitCode=0 Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.392335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2"} Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.392378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerStarted","Data":"afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537"} Jan 30 21:27:38 crc kubenswrapper[4751]: I0130 21:27:38.393201 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerStarted","Data":"00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938"} Jan 30 21:27:39 crc kubenswrapper[4751]: I0130 21:27:39.405728 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a" exitCode=0 Jan 30 21:27:39 crc kubenswrapper[4751]: I0130 21:27:39.405785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a"} Jan 30 21:27:40 crc kubenswrapper[4751]: I0130 21:27:40.418499 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583" exitCode=0 Jan 30 21:27:40 crc kubenswrapper[4751]: I0130 21:27:40.418692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583"} Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.041691 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.044068 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.065468 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100770 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.100816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.201973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202220 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202697 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.202768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.235254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"redhat-operators-cwl6v\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.397642 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429316 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5" exitCode=0 Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5"} Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.833826 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:27:41 crc kubenswrapper[4751]: W0130 21:27:41.840271 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed619dc_ef21_4b05_ad8b_1fe65d151661.slice/crio-ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc WatchSource:0}: Error finding container ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc: Status 404 returned error can't find the container with id ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438309 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5" exitCode=0 Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5"} Jan 30 21:27:42 crc kubenswrapper[4751]: 
Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.397642 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v"
Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429316 4751 generic.go:334] "Generic (PLEG): container finished" podID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerID="a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5" exitCode=0
Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.429392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5"}
Jan 30 21:27:41 crc kubenswrapper[4751]: I0130 21:27:41.833826 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"]
Jan 30 21:27:41 crc kubenswrapper[4751]: W0130 21:27:41.840271 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed619dc_ef21_4b05_ad8b_1fe65d151661.slice/crio-ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc WatchSource:0}: Error finding container ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc: Status 404 returned error can't find the container with id ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438309 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5" exitCode=0
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.438404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5"}
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.440295 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078" exitCode=0
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.440348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078"}
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.440397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc"}
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.688800 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") "
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") "
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.741923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") pod \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\" (UID: \"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e\") "
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.752197 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg" (OuterVolumeSpecName: "kube-api-access-jmbpg") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "kube-api-access-jmbpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.752428 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle" (OuterVolumeSpecName: "bundle") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.776018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util" (OuterVolumeSpecName: "util") pod "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" (UID: "de2d9dc5-eee5-4e7f-86dd-9b7eb581429e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843584 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843787 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:42 crc kubenswrapper[4751]: I0130 21:27:42.843797 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmbpg\" (UniqueName: \"kubernetes.io/projected/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e-kube-api-access-jmbpg\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.447951 4751 generic.go:334] "Generic (PLEG): container finished" podID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerID="cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a" exitCode=0 Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.448019 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.449463 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" event={"ID":"de2d9dc5-eee5-4e7f-86dd-9b7eb581429e","Type":"ContainerDied","Data":"afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537"} Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451568 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afce932094578d4622ffbd64cb7aa71b797cb261dfa58b636da404a5ddeda537" Jan 30 21:27:43 crc kubenswrapper[4751]: I0130 21:27:43.451570 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.460174 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658" exitCode=0 Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.460259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658"} Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.714228 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.774299 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") pod \"1cbca202-59f0-4772-a82c-8c448cbc4c70\" (UID: \"1cbca202-59f0-4772-a82c-8c448cbc4c70\") " Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.776681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle" (OuterVolumeSpecName: "bundle") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.784403 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g" (OuterVolumeSpecName: "kube-api-access-kgl6g") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "kube-api-access-kgl6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.785682 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util" (OuterVolumeSpecName: "util") pod "1cbca202-59f0-4772-a82c-8c448cbc4c70" (UID: "1cbca202-59f0-4772-a82c-8c448cbc4c70"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876450 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876507 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbca202-59f0-4772-a82c-8c448cbc4c70-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:44 crc kubenswrapper[4751]: I0130 21:27:44.876518 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgl6g\" (UniqueName: \"kubernetes.io/projected/1cbca202-59f0-4772-a82c-8c448cbc4c70-kube-api-access-kgl6g\") on node \"crc\" DevicePath \"\"" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.473766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerStarted","Data":"c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b"} Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" event={"ID":"1cbca202-59f0-4772-a82c-8c448cbc4c70","Type":"ContainerDied","Data":"00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938"} Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478203 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f1560fa0698e5c3dfd7ffc5943ed55e1b54b817660be0d0ca6c1e47446f938" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.478278 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg" Jan 30 21:27:45 crc kubenswrapper[4751]: I0130 21:27:45.506612 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwl6v" podStartSLOduration=2.043978496 podStartE2EDuration="4.506592811s" podCreationTimestamp="2026-01-30 21:27:41 +0000 UTC" firstStartedPulling="2026-01-30 21:27:42.44123011 +0000 UTC m=+801.187052759" lastFinishedPulling="2026-01-30 21:27:44.903844425 +0000 UTC m=+803.649667074" observedRunningTime="2026-01-30 21:27:45.504592078 +0000 UTC m=+804.250414737" watchObservedRunningTime="2026-01-30 21:27:45.506592811 +0000 UTC m=+804.252415460" Jan 30 21:27:51 crc kubenswrapper[4751]: I0130 21:27:51.398528 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:51 crc kubenswrapper[4751]: I0130 21:27:51.399018 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.470586 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwl6v" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" probeResult="failure" output=< Jan 30 21:27:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:27:52 crc kubenswrapper[4751]: > Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566512 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h"] Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566789 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566803 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566815 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566829 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566834 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566844 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566850 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566857 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566862 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="pull" Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.566872 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.566877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="util" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567009 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" containerName="extract" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.567750 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571023 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "loki-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571074 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571170 4751 reflector.go:561] object-"openshift-operators-redhat"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571188 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571294 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-manager-config": failed to list *v1.ConfigMap: configmaps "loki-operator-manager-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571312 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-manager-config\": Failed to watch 
Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.571502 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Jan 30 21:27:52 crc kubenswrapper[4751]: W0130 21:27:52.571668 4751 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4v65n": failed to list *v1.Secret: secrets "loki-operator-controller-manager-dockercfg-4v65n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object
Jan 30 21:27:52 crc kubenswrapper[4751]: E0130 21:27:52.571688 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-dockercfg-4v65n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-controller-manager-dockercfg-4v65n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.572296 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
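The forbidden errors above are the node authorizer at work: until the new pod's binding to node crc propagates, system:node:crc has no relationship to the namespace's secrets and configmaps, so the kubelet's reflectors are rejected, log the error, and retry; the "Caches populated" entries at 21:27:53 below show the same lists succeeding about a second later. A toy loop with that retry shape (the propagation is simulated with an attempt counter; real reflectors use list-watch with backoff):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errForbidden = errors.New(`secrets "loki-operator-controller-manager-service-cert" is forbidden: no relationship found between node 'crc' and this object`)

    // listSecrets stands in for the reflector's initial LIST; it fails
    // until the pod-node binding is visible to the node authorizer.
    func listSecrets(attempt int) error {
        if attempt < 3 {
            return errForbidden
        }
        return nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            if err := listSecrets(attempt); err != nil {
                fmt.Println("Unhandled Error:", err) // logged, then retried
                time.Sleep(10 * time.Millisecond)    // real reflectors back off longer
                continue
            }
            fmt.Println("Caches populated for *v1.Secret")
            return
        }
    }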
\"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.736364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.837949 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-webhook-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838051 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsjzj\" (UniqueName: \"kubernetes.io/projected/d32a4de7-a9b5-408d-b678-bcc0244cceee-kube-api-access-lsjzj\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d32a4de7-a9b5-408d-b678-bcc0244cceee-manager-config\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.838164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-apiservice-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:52 crc kubenswrapper[4751]: I0130 21:27:52.844063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.671902 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.686006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsjzj\" (UniqueName: \"kubernetes.io/projected/d32a4de7-a9b5-408d-b678-bcc0244cceee-kube-api-access-lsjzj\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.747502 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.748346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-4v65n" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.755031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-apiservice-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.768962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d32a4de7-a9b5-408d-b678-bcc0244cceee-webhook-cert\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.778047 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.779769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d32a4de7-a9b5-408d-b678-bcc0244cceee-manager-config\") pod \"loki-operator-controller-manager-7988bf4897-spq9h\" (UID: \"d32a4de7-a9b5-408d-b678-bcc0244cceee\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:53 crc kubenswrapper[4751]: I0130 21:27:53.794036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:27:54 crc kubenswrapper[4751]: I0130 21:27:54.139895 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h"] Jan 30 21:27:54 crc kubenswrapper[4751]: I0130 21:27:54.540616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"7a3f6ad980d5cd5f632a1cbac28d9b569f5edb511576da6618a11519b6fa03b3"} Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.861581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.862773 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-vmc5w" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867743 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.867807 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 30 21:27:56 crc kubenswrapper[4751]: I0130 21:27:56.870124 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.003038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.104166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.123265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msml8\" (UniqueName: \"kubernetes.io/projected/c60111a8-d193-4bbb-af4b-a5f286a4b04b-kube-api-access-msml8\") pod \"cluster-logging-operator-79cf69ddc8-tg4r2\" (UID: \"c60111a8-d193-4bbb-af4b-a5f286a4b04b\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.184259 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" Jan 30 21:27:57 crc kubenswrapper[4751]: I0130 21:27:57.646154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2"] Jan 30 21:27:58 crc kubenswrapper[4751]: I0130 21:27:58.570442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" event={"ID":"c60111a8-d193-4bbb-af4b-a5f286a4b04b","Type":"ContainerStarted","Data":"3548e818bc9718757d736b9510d032b65fa1385801e974033185f0e5afed0bdd"} Jan 30 21:28:00 crc kubenswrapper[4751]: I0130 21:28:00.594223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"df93b784a76efee2e8ad28b15daa3ff6b5bfbd86e80b881c2b7ca52f493660dc"} Jan 30 21:28:01 crc kubenswrapper[4751]: I0130 21:28:01.478623 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:01 crc kubenswrapper[4751]: I0130 21:28:01.535937 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.430233 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.431216 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwl6v" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server" containerID="cri-o://c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" gracePeriod=2 Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.627891 4751 generic.go:334] "Generic (PLEG): container finished" podID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerID="c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" exitCode=0 Jan 30 21:28:04 crc kubenswrapper[4751]: I0130 21:28:04.627938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b"} Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.612651 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwl6v" event={"ID":"aed619dc-ef21-4b05-ad8b-1fe65d151661","Type":"ContainerDied","Data":"ad819a6dc3fa066f060d584767d4a31a45ddc97ba5842a4292bc644253801abc"} Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661864 4751 scope.go:117] "RemoveContainer" containerID="c6ce5ed14895f5b93da23984b65fd10bfc6edb5544dcd49c545fcfd5fdc7a36b" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.661962 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwl6v" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.672942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.672983 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.673020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") pod \"aed619dc-ef21-4b05-ad8b-1fe65d151661\" (UID: \"aed619dc-ef21-4b05-ad8b-1fe65d151661\") " Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.673703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities" (OuterVolumeSpecName: "utilities") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.679173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts" (OuterVolumeSpecName: "kube-api-access-qd9ts") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "kube-api-access-qd9ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.714927 4751 scope.go:117] "RemoveContainer" containerID="66d25b52a89b8f927360bf7fff46b9e8c776909bb76d8cf550ff83e5244a7658" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.741715 4751 scope.go:117] "RemoveContainer" containerID="ffd1a65f8a7c27c7f8621cd0bbe5505acca22aece232cb5556eb72c3d0444078" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.774939 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.774977 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9ts\" (UniqueName: \"kubernetes.io/projected/aed619dc-ef21-4b05-ad8b-1fe65d151661-kube-api-access-qd9ts\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.826523 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aed619dc-ef21-4b05-ad8b-1fe65d151661" (UID: "aed619dc-ef21-4b05-ad8b-1fe65d151661"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.876017 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aed619dc-ef21-4b05-ad8b-1fe65d151661-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.987192 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:07 crc kubenswrapper[4751]: I0130 21:28:07.992424 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwl6v"] Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.672205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" event={"ID":"c60111a8-d193-4bbb-af4b-a5f286a4b04b","Type":"ContainerStarted","Data":"8207320acaa5760c14d4e65eab12252def8c72fb4e30567f5ca0b21356489b91"} Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.676173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" event={"ID":"d32a4de7-a9b5-408d-b678-bcc0244cceee","Type":"ContainerStarted","Data":"d27fe29ee58b484cf73e7c5963ecb88e74717eb36fc52410ad47070f6dea6475"} Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.677064 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.681071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.749269 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7988bf4897-spq9h" podStartSLOduration=3.245714837 podStartE2EDuration="16.749242003s" podCreationTimestamp="2026-01-30 21:27:52 +0000 UTC" firstStartedPulling="2026-01-30 21:27:54.156342387 +0000 UTC m=+812.902165036" lastFinishedPulling="2026-01-30 21:28:07.659869553 +0000 UTC m=+826.405692202" observedRunningTime="2026-01-30 21:28:08.743699235 +0000 UTC m=+827.489521884" watchObservedRunningTime="2026-01-30 21:28:08.749242003 +0000 UTC m=+827.495064692" Jan 30 21:28:08 crc kubenswrapper[4751]: I0130 21:28:08.750133 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-tg4r2" podStartSLOduration=2.845247271 podStartE2EDuration="12.750125846s" podCreationTimestamp="2026-01-30 21:27:56 +0000 UTC" firstStartedPulling="2026-01-30 21:27:57.654638326 +0000 UTC m=+816.400460975" lastFinishedPulling="2026-01-30 21:28:07.559516901 +0000 UTC m=+826.305339550" observedRunningTime="2026-01-30 21:28:08.699852842 +0000 UTC m=+827.445675491" watchObservedRunningTime="2026-01-30 21:28:08.750125846 +0000 UTC m=+827.495948515" Jan 30 21:28:09 crc kubenswrapper[4751]: I0130 21:28:09.983012 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" path="/var/lib/kubelet/pods/aed619dc-ef21-4b05-ad8b-1fe65d151661/volumes" Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.218647 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220048 4751 
Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220048 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-content"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220080 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-content"
Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-utilities"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220128 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="extract-utilities"
Jan 30 21:28:13 crc kubenswrapper[4751]: E0130 21:28:13.220169 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220186 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.220495 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed619dc-ef21-4b05-ad8b-1fe65d151661" containerName="registry-server"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.221471 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.225514 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.227172 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.231320 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.258769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.258868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.360502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.360639 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
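The cpu_manager and memory_manager entries above show stale-state cleanup: when the new minio pod is admitted, assignments recorded for containers of pods that no longer exist (here the deleted redhat-operators-cwl6v catalog pod) are swept out of the in-memory state. A schematic sketch of that sweep; the types and helper names are illustrative, not the kubelet's own:

package main

import "fmt"

// Assignments are keyed by pod UID and container name; anything whose pod is
// no longer active gets deleted. Deleting from a map while ranging it is safe
// in Go.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"aed619dc-ef21-4b05-ad8b-1fe65d151661", "registry-server"}: "0-3",
		{"84e1b505-9173-4068-a585-830aa617354d", "minio"}:           "0-3",
	}
	active := map[string]bool{"84e1b505-9173-4068-a585-830aa617354d": true}
	removeStaleState(assignments, active)
	fmt.Println(len(assignments)) // 1: only the live minio entry remains
}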
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.367867 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.367901 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f395a8c08ded556242a2d037122118d8262ff0d7d91a036d1a811634d4c5f87/globalmount\"" pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.387484 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h4w\" (UniqueName: \"kubernetes.io/projected/84e1b505-9173-4068-a585-830aa617354d-kube-api-access-b6h4w\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.408793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15\") pod \"minio\" (UID: \"84e1b505-9173-4068-a585-830aa617354d\") " pod="minio-dev/minio"
Jan 30 21:28:13 crc kubenswrapper[4751]: I0130 21:28:13.549732 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 30 21:28:14 crc kubenswrapper[4751]: I0130 21:28:14.024137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 30 21:28:14 crc kubenswrapper[4751]: W0130 21:28:14.028077 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e1b505_9173_4068_a585_830aa617354d.slice/crio-0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109 WatchSource:0}: Error finding container 0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109: Status 404 returned error can't find the container with id 0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109
Jan 30 21:28:14 crc kubenswrapper[4751]: I0130 21:28:14.740417 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"84e1b505-9173-4068-a585-830aa617354d","Type":"ContainerStarted","Data":"0991fae8c447b2cb8b06df8b33b5ae286ac35ee958656bac86239815181ed109"}
Jan 30 21:28:17 crc kubenswrapper[4751]: I0130 21:28:17.764514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"84e1b505-9173-4068-a585-830aa617354d","Type":"ContainerStarted","Data":"6e986e869fc40809622c325f22953fba262471727cfcc8ccb5d9b118692c29a5"}
Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.628778 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=9.572444439 podStartE2EDuration="12.628761068s" podCreationTimestamp="2026-01-30 21:28:11 +0000 UTC" firstStartedPulling="2026-01-30 21:28:14.031264108 +0000 UTC m=+832.777086767" lastFinishedPulling="2026-01-30 21:28:17.087580737 +0000 UTC m=+835.833403396" observedRunningTime="2026-01-30 21:28:17.783827323 +0000 UTC m=+836.529649982" watchObservedRunningTime="2026-01-30 21:28:23.628761068 +0000 UTC m=+842.374583727"
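The csi_attacher.go entry above records a capability gate: the hostpath provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the device-level MountDevice (NodeStageVolume) step is reported as trivially succeeded and only the per-pod SetUp mount runs. A hedged sketch of that gate; the type and function names are assumptions for illustration, not the in-tree attacher's actual API:

package main

import "fmt"

type nodeCapabilities struct{ stageUnstage bool }

// mountDevice only performs device staging when the CSI driver reports the
// STAGE_UNSTAGE_VOLUME capability; otherwise it is a deliberate no-op.
func mountDevice(caps nodeCapabilities, volume, globalMountPath string) error {
	if !caps.stageUnstage {
		fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
		return nil // nothing to stage; SetUp will mount into the pod directly
	}
	fmt.Printf("staging %s at %s\n", volume, globalMountPath)
	return nil
}

func main() {
	hostpath := nodeCapabilities{stageUnstage: false}
	// globalmount path shortened here for readability.
	_ = mountDevice(hostpath, "pvc-1d3d2be3-99fb-4325-9215-bf9f53e6bb15",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/.../globalmount")
}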
pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.634589 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.640940 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641197 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-x85pc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641467 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.641630 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.648115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.659186 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732661 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732762 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.732855 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: 
\"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.797144 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.798014 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800159 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.800637 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.819793 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839469 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839594 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839648 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.839735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.841283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.842112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066c155-02e0-448e-9d4c-f578a36e553b-config\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.848819 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.862134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d066c155-02e0-448e-9d4c-f578a36e553b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.869673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pshk\" (UniqueName: \"kubernetes.io/projected/d066c155-02e0-448e-9d4c-f578a36e553b-kube-api-access-6pshk\") pod \"logging-loki-distributor-5f678c8dd6-mc9wc\" (UID: \"d066c155-02e0-448e-9d4c-f578a36e553b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.908648 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.909551 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.912037 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.912239 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.918238 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.940990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941056 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.941190 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.942838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-config\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.945282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-s3\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.945361 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.952127 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.962645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9r9\" (UniqueName: \"kubernetes.io/projected/096a86f8-72dc-4bd5-a2b4-48b67a26d792-kube-api-access-zh9r9\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p"
Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.962652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p"
Jan 30 21:28:23 crc kubenswrapper[4751]: I0130 21:28:23.963132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/096a86f8-72dc-4bd5-a2b4-48b67a26d792-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-gbf6p\" (UID: \"096a86f8-72dc-4bd5-a2b4-48b67a26d792\") " pod="openshift-logging/logging-loki-querier-76788598db-gbf6p"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.043219 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.060403 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"]
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.061516 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065183 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065384 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065551 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065697 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-zkphp"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065792 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.065895 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.073195 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"]
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.075051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.093858 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"]
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.106559 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"]
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.114705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.146664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149399 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.149585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.150940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083b036-5700-420a-ad3f-1e471813194e-config\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.164640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.165644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh684\" (UniqueName: \"kubernetes.io/projected/8083b036-5700-420a-ad3f-1e471813194e-kube-api-access-rh684\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.169436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8083b036-5700-420a-ad3f-1e471813194e-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-7cdp9\" (UID: \"8083b036-5700-420a-ad3f-1e471813194e\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.238680 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254110 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254150 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254363 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254379 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254488 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.254527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356155 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356171 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"
Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356382 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.356413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.357710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.363383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.363736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.364045 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.364722 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.365388 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-rbac\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.369024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.369682 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.372815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-rbac\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.373374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326140a4-6f2a-48c1-b5a2-0b02ce345c50-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.379046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tls-secret\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.385575 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/653268f5-1827-4109-a68b-3cc7670e65f8-lokistack-gateway\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.386086 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/326140a4-6f2a-48c1-b5a2-0b02ce345c50-tenants\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.392891 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/653268f5-1827-4109-a68b-3cc7670e65f8-tenants\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.397559 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bqv\" (UniqueName: \"kubernetes.io/projected/326140a4-6f2a-48c1-b5a2-0b02ce345c50-kube-api-access-v7bqv\") pod \"logging-loki-gateway-5f4fcfb764-rbqpr\" (UID: \"326140a4-6f2a-48c1-b5a2-0b02ce345c50\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.401023 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7q4\" (UniqueName: \"kubernetes.io/projected/653268f5-1827-4109-a68b-3cc7670e65f8-kube-api-access-cs7q4\") pod \"logging-loki-gateway-5f4fcfb764-r5mfq\" (UID: \"653268f5-1827-4109-a68b-3cc7670e65f8\") " pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.459726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-gbf6p"] Jan 30 21:28:24 crc kubenswrapper[4751]: W0130 21:28:24.464919 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096a86f8_72dc_4bd5_a2b4_48b67a26d792.slice/crio-f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09 WatchSource:0}: Error finding container f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09: Status 404 returned error can't find the container with id f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09 Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.502203 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.677361 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.690456 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.734265 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9"] Jan 30 21:28:24 crc kubenswrapper[4751]: W0130 21:28:24.744587 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8083b036_5700_420a_ad3f_1e471813194e.slice/crio-2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54 WatchSource:0}: Error finding container 2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54: Status 404 returned error can't find the container with id 2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54 Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.771706 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.772485 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.786233 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.786522 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.797383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.827037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" event={"ID":"8083b036-5700-420a-ad3f-1e471813194e","Type":"ContainerStarted","Data":"2944e54dd81697a117e6129692f5a8574b8e85beab2799f9080c616cf3fdcc54"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.828102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" event={"ID":"096a86f8-72dc-4bd5-a2b4-48b67a26d792","Type":"ContainerStarted","Data":"f0a98c18dea34121b5b9774178a735f5d0b1284591395e8229ba68e9360f5f09"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.828923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" event={"ID":"d066c155-02e0-448e-9d4c-f578a36e553b","Type":"ContainerStarted","Data":"8d24abdf2e7ae192934f48cb88b9651e61760a7ee6004118ae7f48385ce382ca"} Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.857119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.858587 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.863541 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.864241 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.911575 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969708 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 
21:28:24.969952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.969981 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnf5b\" (UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptspc\" (UniqueName: \"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.970155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.975199 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.976246 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.977827 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.978345 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 30 21:28:24 crc kubenswrapper[4751]: I0130 21:28:24.993019 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071523 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: 
\"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071772 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071792 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnf5b\" 
(UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.071813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072016 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072033 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptspc\" (UniqueName: \"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqkz\" (UniqueName: \"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072066 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072084 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072380 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-config\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.072990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 
crc kubenswrapper[4751]: I0130 21:28:25.073398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-config\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.073767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075196 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075253 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3e5a898cde78503bcbc69e1309804887a41e9299a5bd2218ae892ae76cb82ed7/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075374 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.075440 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b845cc0feb948dbb7deae79b7b516e3f66551a47214c7ebc82092ce5df79bd36/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.077472 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
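The `csi_attacher.go:380` entries above explain why the hostpath PVCs get a "MountVolume.MountDevice succeeded" record without any real staging work: the kubevirt.io.hostpath-provisioner driver does not advertise the CSI `STAGE_UNSTAGE_VOLUME` node capability, so the kubelet skips the `NodeStageVolume` RPC inside MountDevice, records the global mount path, and proceeds straight to `MountVolume.SetUp` (`NodePublishVolume`). A minimal sketch of that capability check, written against the CSI spec's Go bindings rather than kubelet's actual attacher code, with a hypothetical plugin socket path:

```go
// Sketch only (not kubelet's implementation): ask a CSI node plugin whether it
// implements STAGE_UNSTAGE_VOLUME. When the capability is absent -- as with
// kubevirt.io.hostpath-provisioner in the log above -- there is nothing to do
// in MountDevice/NodeStageVolume, and the volume can be published directly.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical socket path; the real one depends on how the driver
	// registered itself under /var/lib/kubelet/plugins/.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatal(err)
	}

	staged := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			staged = true
		}
	}
	if staged {
		fmt.Println("driver stages volumes: NodeStageVolume must run before NodePublishVolume")
	} else {
		fmt.Println("STAGE_UNSTAGE_VOLUME not set: skip device staging, publish directly")
	}
}
```

This matches the pattern visible in the log: each "Skipping MountDevice..." notice is immediately followed by a "MountVolume.MountDevice succeeded" record carrying only the computed globalmount path.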
Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.077546 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7ce2ba0f1123ef06fae8236ed24c90f356dfddc255bd41ef3c9d05613a53ed99/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.078676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.080190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.080245 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.088123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094938 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4f247b61-4ba2-4c4e-8d97-c16900635ddc-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.094978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnf5b\" (UniqueName: \"kubernetes.io/projected/4f247b61-4ba2-4c4e-8d97-c16900635ddc-kube-api-access-xnf5b\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.096154 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptspc\" (UniqueName: 
\"kubernetes.io/projected/82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76-kube-api-access-ptspc\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.109937 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7849282-d14c-4ebf-9847-4c96c23ead9f\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.115637 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-482c967c-287e-45a8-9b36-a1858d3f6deb\") pod \"logging-loki-compactor-0\" (UID: \"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76\") " pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.133709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0b035f56-8a65-4d76-b020-d0cb55e72851\") pod \"logging-loki-ingester-0\" (UID: \"4f247b61-4ba2-4c4e-8d97-c16900635ddc\") " pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.153812 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.173974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqkz\" (UniqueName: 
\"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174038 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.174072 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.175871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.176690 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-config\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178662 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.178808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.185540 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.185572 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa52f9e922426d130780c9aac3e51ba77ffea30778644523bdaa2a2ecdc4e60e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.190240 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqkz\" (UniqueName: \"kubernetes.io/projected/12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac-kube-api-access-6kqkz\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.207412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.213456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7e3292fe-fbaf-4cb3-82db-8bb5a0e7ea0a\") pod \"logging-loki-index-gateway-0\" (UID: \"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.225190 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq"] Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.290544 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.397823 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.538575 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: W0130 21:28:25.549813 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12f75dd3_7d12_4b19_8e7d_cfef30b3f0ac.slice/crio-ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943 WatchSource:0}: Error finding container ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943: Status 404 returned error can't find the container with id ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943 Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.651204 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 30 21:28:25 crc kubenswrapper[4751]: W0130 21:28:25.659088 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82dfa01d_f00f_4e1c_ab66_d8fbc48eaf76.slice/crio-71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3 WatchSource:0}: Error finding container 71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3: Status 404 returned error can't find the container with id 71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3 Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.843048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"c6fbd96c4addb2dec56ee8314209c9470e2f3fbf8c73adb954d388e26fe74d5c"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.849758 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76","Type":"ContainerStarted","Data":"71386c36ec245b7811ab524647b4ad53724bc23ae264a9d5a21bc01d54027eb3"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.853646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"55ae5758d9de3673fa2ac8febb3c7b25e9432bf5628cda64c986848cadb9da21"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.855412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac","Type":"ContainerStarted","Data":"ab7e4a0b643be9d17ecad85a9977d935dbb719d2107da3f2995cf2ef9d60c943"} Jan 30 21:28:25 crc kubenswrapper[4751]: I0130 21:28:25.857539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 30 21:28:26 crc kubenswrapper[4751]: I0130 21:28:26.862653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4f247b61-4ba2-4c4e-8d97-c16900635ddc","Type":"ContainerStarted","Data":"7c41fc0b98b13bb8dece6969a16850c5d3e819c43ec0be436aef52ce6830ab92"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.876615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" 
event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"7340c9d6a9351437f97c77676469e7a94e839e93f944efc8e06a8aa6fe947937"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.879713 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" event={"ID":"8083b036-5700-420a-ad3f-1e471813194e","Type":"ContainerStarted","Data":"bc527c14d05287d241de1e09879a365c36c5e5456c1b761dbf065ff5b723d2f9"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.879866 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.882179 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac","Type":"ContainerStarted","Data":"33c1b45afcaae3d9bc0ff29ee0dc444f6f57d8150232ed6ed60da6485d984e7d"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.882726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.884739 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"4f247b61-4ba2-4c4e-8d97-c16900635ddc","Type":"ContainerStarted","Data":"1a0ee61b6cebab1975995cf31396024a857abdc74a4e9e4d35dccf868f60b7a6"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.885356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.887267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"d9cab196a72ee1a74d563246067718b01a7cecf17733523f2c73e5ea1bb34de1"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.888777 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" event={"ID":"096a86f8-72dc-4bd5-a2b4-48b67a26d792","Type":"ContainerStarted","Data":"a4211917bdb9b586412f60364cd7b7243ff8c2ad371a087663cfdbdeb51d3896"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.889273 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.895162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" event={"ID":"d066c155-02e0-448e-9d4c-f578a36e553b","Type":"ContainerStarted","Data":"ba5282dca37c18d592f18b4c9bbdbc820e9851689266de48bd0b6d9b95b790cb"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.895260 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.896791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76","Type":"ContainerStarted","Data":"d8bee72bccad9b637330d790db23fb146c116df6283a94912bda28b5d5412eaf"} Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.896938 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.904897 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" podStartSLOduration=2.416258354 podStartE2EDuration="5.904870855s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.747677728 +0000 UTC m=+843.493500377" lastFinishedPulling="2026-01-30 21:28:28.236290229 +0000 UTC m=+846.982112878" observedRunningTime="2026-01-30 21:28:28.902976914 +0000 UTC m=+847.648799563" watchObservedRunningTime="2026-01-30 21:28:28.904870855 +0000 UTC m=+847.650693514" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.924538 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" podStartSLOduration=2.239709826 podStartE2EDuration="5.92451768s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.515204156 +0000 UTC m=+843.261026805" lastFinishedPulling="2026-01-30 21:28:28.200012 +0000 UTC m=+846.945834659" observedRunningTime="2026-01-30 21:28:28.919585688 +0000 UTC m=+847.665408337" watchObservedRunningTime="2026-01-30 21:28:28.92451768 +0000 UTC m=+847.670340329" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.969265 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" podStartSLOduration=2.203128739 podStartE2EDuration="5.969240675s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:24.467934143 +0000 UTC m=+843.213756792" lastFinishedPulling="2026-01-30 21:28:28.234046079 +0000 UTC m=+846.979868728" observedRunningTime="2026-01-30 21:28:28.943428945 +0000 UTC m=+847.689251594" watchObservedRunningTime="2026-01-30 21:28:28.969240675 +0000 UTC m=+847.715063344" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.979062 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.603218001 podStartE2EDuration="5.979038117s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.865000145 +0000 UTC m=+844.610822794" lastFinishedPulling="2026-01-30 21:28:28.240820261 +0000 UTC m=+846.986642910" observedRunningTime="2026-01-30 21:28:28.965792152 +0000 UTC m=+847.711614811" watchObservedRunningTime="2026-01-30 21:28:28.979038117 +0000 UTC m=+847.724860766" Jan 30 21:28:28 crc kubenswrapper[4751]: I0130 21:28:28.999583 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.426118009 podStartE2EDuration="5.999561925s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.661268271 +0000 UTC m=+844.407090920" lastFinishedPulling="2026-01-30 21:28:28.234712177 +0000 UTC m=+846.980534836" observedRunningTime="2026-01-30 21:28:28.983159207 +0000 UTC m=+847.728981856" watchObservedRunningTime="2026-01-30 21:28:28.999561925 +0000 UTC m=+847.745384564" Jan 30 21:28:29 crc kubenswrapper[4751]: I0130 21:28:29.005594 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.324192906 podStartE2EDuration="6.005571896s" podCreationTimestamp="2026-01-30 21:28:23 +0000 UTC" 
firstStartedPulling="2026-01-30 21:28:25.552594387 +0000 UTC m=+844.298417036" lastFinishedPulling="2026-01-30 21:28:28.233973357 +0000 UTC m=+846.979796026" observedRunningTime="2026-01-30 21:28:29.004051905 +0000 UTC m=+847.749874554" watchObservedRunningTime="2026-01-30 21:28:29.005571896 +0000 UTC m=+847.751394545" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" event={"ID":"653268f5-1827-4109-a68b-3cc7670e65f8","Type":"ContainerStarted","Data":"dcbe57c86897fa5801e7854f0c7ed93a8b1bba3fa8aace2d9fcdfac51a32cef6"} Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.916862 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.918611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" event={"ID":"326140a4-6f2a-48c1-b5a2-0b02ce345c50","Type":"ContainerStarted","Data":"fbf7eb6d5ff5c96ac30c4b488638e2bc644bc299c8e0390cb6700a3c5a4f0a2d"} Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.926636 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.932714 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.944896 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-r5mfq" podStartSLOduration=1.538743586 podStartE2EDuration="6.944880727s" podCreationTimestamp="2026-01-30 21:28:24 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.231037614 +0000 UTC m=+843.976860283" lastFinishedPulling="2026-01-30 21:28:30.637174765 +0000 UTC m=+849.382997424" observedRunningTime="2026-01-30 21:28:30.94238348 +0000 UTC m=+849.688206149" watchObservedRunningTime="2026-01-30 21:28:30.944880727 +0000 UTC m=+849.690703376" Jan 30 21:28:30 crc kubenswrapper[4751]: I0130 21:28:30.967268 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" podStartSLOduration=1.504712608 podStartE2EDuration="6.967252975s" podCreationTimestamp="2026-01-30 21:28:24 +0000 UTC" firstStartedPulling="2026-01-30 21:28:25.15900875 +0000 UTC m=+843.904831389" lastFinishedPulling="2026-01-30 21:28:30.621549097 +0000 UTC m=+849.367371756" observedRunningTime="2026-01-30 21:28:30.9621938 +0000 UTC m=+849.708016449" watchObservedRunningTime="2026-01-30 21:28:30.967252975 +0000 UTC m=+849.713075624" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.929268 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.929784 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.946921 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:31 crc kubenswrapper[4751]: I0130 21:28:31.948192 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f4fcfb764-rbqpr" Jan 30 21:28:43 crc kubenswrapper[4751]: I0130 21:28:43.984235 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-mc9wc" Jan 30 21:28:44 crc kubenswrapper[4751]: I0130 21:28:44.121354 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-gbf6p" Jan 30 21:28:44 crc kubenswrapper[4751]: I0130 21:28:44.245914 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-7cdp9" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.214905 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.317884 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.405058 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 30 21:28:45 crc kubenswrapper[4751]: I0130 21:28:45.405104 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:28:54 crc kubenswrapper[4751]: I0130 21:28:54.126689 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:28:54 crc kubenswrapper[4751]: I0130 21:28:54.127435 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:28:55 crc kubenswrapper[4751]: I0130 21:28:55.417569 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 30 21:28:55 crc kubenswrapper[4751]: I0130 21:28:55.417652 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:05 crc kubenswrapper[4751]: I0130 21:29:05.404413 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" 
start-of-body=Ingester not ready: waiting for 15s after being ready Jan 30 21:29:05 crc kubenswrapper[4751]: I0130 21:29:05.405051 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:15 crc kubenswrapper[4751]: I0130 21:29:15.402669 4751 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 30 21:29:15 crc kubenswrapper[4751]: I0130 21:29:15.403546 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="4f247b61-4ba2-4c4e-8d97-c16900635ddc" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:29:24 crc kubenswrapper[4751]: I0130 21:29:24.127030 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:29:24 crc kubenswrapper[4751]: I0130 21:29:24.127678 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:29:25 crc kubenswrapper[4751]: I0130 21:29:25.403004 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.524085 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-fs9d8"] Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.525908 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.524085 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.525908 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.534830 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536870 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jp6fq"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536974 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.537027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.536983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.538165 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.548237 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570943 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.570978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571003 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:42 crc
kubenswrapper[4751]: I0130 21:29:42.571074 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571153 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.571347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.616446 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fs9d8"] Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.617110 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-n7pq2 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-fs9d8" podUID="b6746a54-2590-4b31-99ef-332ede51c384" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672401 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod 
\"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672504 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.672552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673779 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.673797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.673982 4751 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 30 21:29:42 crc kubenswrapper[4751]: E0130 21:29:42.674041 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver podName:b6746a54-2590-4b31-99ef-332ede51c384 nodeName:}" failed. No retries permitted until 2026-01-30 21:29:43.174025958 +0000 UTC m=+921.919848597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver") pod "collector-fs9d8" (UID: "b6746a54-2590-4b31-99ef-332ede51c384") : secret "collector-syslog-receiver" not found Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.674468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.674583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.677359 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.678509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.678758 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.695116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"collector-fs9d8\" (UID: 
\"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:42 crc kubenswrapper[4751]: I0130 21:29:42.695213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.186012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.194101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"collector-fs9d8\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.558631 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.570978 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.694931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695073 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695253 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695355 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7pq2\" 
(UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695518 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.695596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") pod \"b6746a54-2590-4b31-99ef-332ede51c384\" (UID: \"b6746a54-2590-4b31-99ef-332ede51c384\") " Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696222 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config" (OuterVolumeSpecName: "config") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696240 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir" (OuterVolumeSpecName: "datadir") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.696461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.697368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.700778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.702095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp" (OuterVolumeSpecName: "tmp") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.702131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token" (OuterVolumeSpecName: "collector-token") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.703234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2" (OuterVolumeSpecName: "kube-api-access-n7pq2") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "kube-api-access-n7pq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.711543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics" (OuterVolumeSpecName: "metrics") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.713857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token" (OuterVolumeSpecName: "sa-token") pod "b6746a54-2590-4b31-99ef-332ede51c384" (UID: "b6746a54-2590-4b31-99ef-332ede51c384"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797582 4751 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6746a54-2590-4b31-99ef-332ede51c384-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797655 4751 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797686 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797712 4751 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797739 4751 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-token\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797767 4751 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b6746a54-2590-4b31-99ef-332ede51c384-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797793 4751 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797813 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7pq2\" (UniqueName: \"kubernetes.io/projected/b6746a54-2590-4b31-99ef-332ede51c384-kube-api-access-n7pq2\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797832 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797849 4751 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b6746a54-2590-4b31-99ef-332ede51c384-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:43 crc kubenswrapper[4751]: I0130 21:29:43.797871 4751 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b6746a54-2590-4b31-99ef-332ede51c384-datadir\") on node \"crc\" DevicePath \"\"" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.563478 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.563478 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fs9d8"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.611305 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.623739 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-fs9d8"]
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.633386 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-f6llv"]
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.634714 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.638788 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639010 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639309 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639401 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jp6fq"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.639393 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.646908 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.652548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f6llv"]
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711387 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv"
Jan 30 21:29:44 crc kubenswrapper[4751]: I0130
21:29:44.711608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-entrypoint\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711631 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711708 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711761 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711811 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.711908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.814852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-entrypoint\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.815937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816371 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.817869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-trusted-ca\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.817944 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d1f22c66-daa2-4dd7-8394-ceab983464e2-datadir\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.816589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-entrypoint\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.818644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.819574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d1f22c66-daa2-4dd7-8394-ceab983464e2-config-openshift-service-cacrt\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.820574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1f22c66-daa2-4dd7-8394-ceab983464e2-tmp\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.821057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-syslog-receiver\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.823218 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-collector-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.824874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d1f22c66-daa2-4dd7-8394-ceab983464e2-metrics\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.840500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2w7\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-kube-api-access-5z2w7\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 crc kubenswrapper[4751]: I0130 21:29:44.846138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d1f22c66-daa2-4dd7-8394-ceab983464e2-sa-token\") pod \"collector-f6llv\" (UID: \"d1f22c66-daa2-4dd7-8394-ceab983464e2\") " pod="openshift-logging/collector-f6llv" Jan 30 21:29:44 
crc kubenswrapper[4751]: I0130 21:29:44.950417 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-f6llv" Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.427892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f6llv"] Jan 30 21:29:45 crc kubenswrapper[4751]: W0130 21:29:45.450532 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f22c66_daa2_4dd7_8394_ceab983464e2.slice/crio-827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370 WatchSource:0}: Error finding container 827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370: Status 404 returned error can't find the container with id 827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370 Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.574211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f6llv" event={"ID":"d1f22c66-daa2-4dd7-8394-ceab983464e2","Type":"ContainerStarted","Data":"827bb56764fc7da48376796d3fb2ffc010f527570d6089a054ae159c6450c370"} Jan 30 21:29:45 crc kubenswrapper[4751]: I0130 21:29:45.991475 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6746a54-2590-4b31-99ef-332ede51c384" path="/var/lib/kubelet/pods/b6746a54-2590-4b31-99ef-332ede51c384/volumes" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.188170 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.189873 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.195153 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.261932 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.261993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.262023 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " 
pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.363778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.364401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.364779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.389088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"redhat-marketplace-8clf6\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.511515 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6"
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.634572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f6llv" event={"ID":"d1f22c66-daa2-4dd7-8394-ceab983464e2","Type":"ContainerStarted","Data":"8afd54fb03a09aa06a3be18abbe11b4261679a32c0b42df1754fb5bbbe369bc6"}
Jan 30 21:29:52 crc kubenswrapper[4751]: I0130 21:29:52.667532 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-f6llv" podStartSLOduration=2.1333414729999998 podStartE2EDuration="8.667511673s" podCreationTimestamp="2026-01-30 21:29:44 +0000 UTC" firstStartedPulling="2026-01-30 21:29:45.452263941 +0000 UTC m=+924.198086590" lastFinishedPulling="2026-01-30 21:29:51.986434141 +0000 UTC m=+930.732256790" observedRunningTime="2026-01-30 21:29:52.656246541 +0000 UTC m=+931.402069190" watchObservedRunningTime="2026-01-30 21:29:52.667511673 +0000 UTC m=+931.413334322"
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.027746 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"]
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643253 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" exitCode=0
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643351 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076"}
Jan 30 21:29:53 crc kubenswrapper[4751]: I0130 21:29:53.643692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerStarted","Data":"e11d99f981d4367e383246d739931587292a66f139c6473cdf0a79c78b681de8"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127186 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127262 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.127311 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.128097 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
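
Unlike the readiness failures earlier, this liveness failure has teeth: the kubelet decides the container must be restarted, and the next entry kills it with gracePeriod=600, meaning the process receives SIGTERM first and SIGKILL only if it is still alive 600 seconds later. A long-running service should treat SIGTERM as a shutdown request so it restarts cleanly; a minimal Go pattern for that (the server, port, and drain timeout here are illustrative, not the daemon's actual code):

// shutdown.go: catch the SIGTERM that "Killing container with a grace
// period" delivers and drain before the grace period expires.
package main

import (
    "context"
    "net/http"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":8798"} // port borrowed from the probe URL above

    // ctx is cancelled when SIGTERM (or Ctrl-C) arrives.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer stop()

    go srv.ListenAndServe()
    <-ctx.Done() // the kubelet has asked us to exit

    // Finish in-flight requests well inside the grace period;
    // after it expires, the runtime escalates to SIGKILL.
    drainCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    srv.Shutdown(drainCtx)
}

Here the daemon exited promptly (exitCode=0 about half a second after the kill), so the grace period was never exhausted.
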
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.128169 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b" gracePeriod=600
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.652106 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b" exitCode=0
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.652187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.653208 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"}
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.653247 4751 scope.go:117] "RemoveContainer" containerID="a610754d75a118a60637a1e554575fc5a5a243d54c20205f4fedf2c00e804266"
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.656257 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" exitCode=0
Jan 30 21:29:54 crc kubenswrapper[4751]: I0130 21:29:54.656308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675"}
Jan 30 21:29:55 crc kubenswrapper[4751]: I0130 21:29:55.666938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerStarted","Data":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"}
Jan 30 21:29:55 crc kubenswrapper[4751]: I0130 21:29:55.685972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8clf6" podStartSLOduration=2.315184462 podStartE2EDuration="3.685955217s" podCreationTimestamp="2026-01-30 21:29:52 +0000 UTC" firstStartedPulling="2026-01-30 21:29:53.646100492 +0000 UTC m=+932.391923141" lastFinishedPulling="2026-01-30 21:29:55.016871247 +0000 UTC m=+933.762693896" observedRunningTime="2026-01-30 21:29:55.683416729 +0000 UTC m=+934.429239398" watchObservedRunningTime="2026-01-30 21:29:55.685955217 +0000 UTC m=+934.431777876"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.181561 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
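
The two pod_startup_latency_tracker entries above decode neatly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that, so time spent pulling images does not count against the startup SLO. Re-deriving both figures from the wall-clock timestamps (a verification sketch, not kubelet code; the m=+ monotonic suffixes are dropped before parsing):

// slo.go: re-derive podStartSLOduration = E2E - pull time from the
// tracker entries above; timestamps copied verbatim from the log.
package main

import (
    "fmt"
    "time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(s string) time.Time {
    t, err := time.Parse(layout, s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    pods := []struct{ name, created, firstPull, lastPull, observed string }{
        {"collector-f6llv", "2026-01-30 21:29:44 +0000 UTC",
            "2026-01-30 21:29:45.452263941 +0000 UTC",
            "2026-01-30 21:29:51.986434141 +0000 UTC",
            "2026-01-30 21:29:52.667511673 +0000 UTC"},
        {"redhat-marketplace-8clf6", "2026-01-30 21:29:52 +0000 UTC",
            "2026-01-30 21:29:53.646100492 +0000 UTC",
            "2026-01-30 21:29:55.016871247 +0000 UTC",
            "2026-01-30 21:29:55.685955217 +0000 UTC"},
    }
    for _, p := range pods {
        e2e := parse(p.observed).Sub(parse(p.created))
        pull := parse(p.lastPull).Sub(parse(p.firstPull))
        fmt.Printf("%s: E2E=%v SLO=%v\n", p.name, e2e, e2e-pull)
    }
    // Output:
    // collector-f6llv: E2E=8.667511673s SLO=2.133341473s
    // redhat-marketplace-8clf6: E2E=3.685955217s SLO=2.315184462s
}

Both results match the logged values; the collector's 2.1333414729999998 is the same number with float64 noise.
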
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.183053 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.185293 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.186159 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.195675 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"]
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307013 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307067 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.307250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.409192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"
Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.410495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod
\"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.417476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.432925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"collect-profiles-29496810-rblx8\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.507900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:00 crc kubenswrapper[4751]: I0130 21:30:00.978941 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"] Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711382 4751 generic.go:334] "Generic (PLEG): container finished" podID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerID="664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4" exitCode=0 Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerDied","Data":"664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4"} Jan 30 21:30:01 crc kubenswrapper[4751]: I0130 21:30:01.711667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerStarted","Data":"aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c"} Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.511629 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.511682 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.565903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.766553 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.825973 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:30:02 crc kubenswrapper[4751]: I0130 21:30:02.991860 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.154108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.155155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.155194 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") pod \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\" (UID: \"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6\") " Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.156957 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.157603 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.162251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.163470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl" (OuterVolumeSpecName: "kube-api-access-nwqnl") pod "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" (UID: "44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6"). InnerVolumeSpecName "kube-api-access-nwqnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.259255 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.259297 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwqnl\" (UniqueName: \"kubernetes.io/projected/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6-kube-api-access-nwqnl\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" event={"ID":"44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6","Type":"ContainerDied","Data":"aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c"} Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738509 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae75191e239c3a86e97cbe6355a3766cbfeb781c33b0b00aeccf9b73c16953c" Jan 30 21:30:03 crc kubenswrapper[4751]: I0130 21:30:03.738451 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8" Jan 30 21:30:04 crc kubenswrapper[4751]: I0130 21:30:04.749259 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8clf6" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" containerID="cri-o://d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" gracePeriod=2 Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.250228 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.421637 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") pod \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\" (UID: \"cc5cceb1-605d-4f28-a5ed-a70292156bf4\") " Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.423124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities" (OuterVolumeSpecName: "utilities") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.432609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9" (OuterVolumeSpecName: "kube-api-access-9dfn9") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "kube-api-access-9dfn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.461431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5cceb1-605d-4f28-a5ed-a70292156bf4" (UID: "cc5cceb1-605d-4f28-a5ed-a70292156bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524740 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfn9\" (UniqueName: \"kubernetes.io/projected/cc5cceb1-605d-4f28-a5ed-a70292156bf4-kube-api-access-9dfn9\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.524842 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5cceb1-605d-4f28-a5ed-a70292156bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761695 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" exitCode=0 Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761750 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"} Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8clf6" event={"ID":"cc5cceb1-605d-4f28-a5ed-a70292156bf4","Type":"ContainerDied","Data":"e11d99f981d4367e383246d739931587292a66f139c6473cdf0a79c78b681de8"} Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761833 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8clf6" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.761850 4751 scope.go:117] "RemoveContainer" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.789401 4751 scope.go:117] "RemoveContainer" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.820885 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.831268 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8clf6"] Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.849611 4751 scope.go:117] "RemoveContainer" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.866829 4751 scope.go:117] "RemoveContainer" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867300 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": container with ID starting with d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3 not found: ID does not exist" containerID="d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867349 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3"} err="failed to get container status \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": rpc error: code = NotFound desc = could not find container \"d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3\": container with ID starting with d7cd20743070613bbd66e5062d4425c242e25d1a9f894acd5a527d8f0b0d83c3 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867374 4751 scope.go:117] "RemoveContainer" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867660 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": container with ID starting with 48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675 not found: ID does not exist" containerID="48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867679 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675"} err="failed to get container status \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": rpc error: code = NotFound desc = could not find container \"48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675\": container with ID starting with 48cf0080566ee6f63f70d0c98cdec8939b9b7b696d58ddd2f3c43693635ec675 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867690 4751 scope.go:117] "RemoveContainer" 
containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: E0130 21:30:05.867921 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": container with ID starting with 036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076 not found: ID does not exist" containerID="036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.867943 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076"} err="failed to get container status \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": rpc error: code = NotFound desc = could not find container \"036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076\": container with ID starting with 036af09cf7aab0d70c3f63156246b04080c96a1dc0b0a08ef2d673f3a545b076 not found: ID does not exist" Jan 30 21:30:05 crc kubenswrapper[4751]: I0130 21:30:05.988180 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" path="/var/lib/kubelet/pods/cc5cceb1-605d-4f28-a5ed-a70292156bf4/volumes" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.629966 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631644 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631682 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631768 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631818 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631861 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-utilities" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631910 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-utilities" Jan 30 21:30:08 crc kubenswrapper[4751]: E0130 21:30:08.631946 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-content" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.631957 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" containerName="extract-content" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.633074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" containerName="collect-profiles" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.633160 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5cceb1-605d-4f28-a5ed-a70292156bf4" 
containerName="registry-server" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.637317 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.641784 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.680976 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.781854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.782578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.782580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " 
pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.811622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"certified-operators-cqsvg\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") " pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:08 crc kubenswrapper[4751]: I0130 21:30:08.984449 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.487774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:09 crc kubenswrapper[4751]: W0130 21:30:09.497246 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod658cd3aa_28fd_4fdd_bbce_ab07effcdc0b.slice/crio-fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc WatchSource:0}: Error finding container fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc: Status 404 returned error can't find the container with id fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.798736 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403" exitCode=0 Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.799003 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403"} Jan 30 21:30:09 crc kubenswrapper[4751]: I0130 21:30:09.799146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerStarted","Data":"fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc"} Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.199109 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.201144 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.228169 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.377967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.479859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.480157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.480213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.512866 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"community-operators-82fwr\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.572700 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.850853 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7" exitCode=0 Jan 30 21:30:14 crc kubenswrapper[4751]: I0130 21:30:14.852308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.067991 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:15 crc kubenswrapper[4751]: W0130 21:30:15.084463 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f294d30_8a47_4f81_930d_3c0bbf564a2e.slice/crio-113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9 WatchSource:0}: Error finding container 113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9: Status 404 returned error can't find the container with id 113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9 Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.863674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerStarted","Data":"14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866415 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" exitCode=0 Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.866528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerStarted","Data":"113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9"} Jan 30 21:30:15 crc kubenswrapper[4751]: I0130 21:30:15.883519 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqsvg" podStartSLOduration=2.416073276 podStartE2EDuration="7.883493064s" podCreationTimestamp="2026-01-30 21:30:08 +0000 UTC" firstStartedPulling="2026-01-30 21:30:09.802017172 +0000 UTC m=+948.547839851" lastFinishedPulling="2026-01-30 21:30:15.26943699 +0000 UTC m=+954.015259639" observedRunningTime="2026-01-30 21:30:15.881755188 +0000 UTC 
m=+954.627577877" watchObservedRunningTime="2026-01-30 21:30:15.883493064 +0000 UTC m=+954.629315763" Jan 30 21:30:18 crc kubenswrapper[4751]: I0130 21:30:18.985359 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:18 crc kubenswrapper[4751]: I0130 21:30:18.985886 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:19 crc kubenswrapper[4751]: I0130 21:30:19.065779 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:20 crc kubenswrapper[4751]: I0130 21:30:20.918699 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" exitCode=0 Jan 30 21:30:20 crc kubenswrapper[4751]: I0130 21:30:20.918802 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.463387 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.470792 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2h4zhg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.476919 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.483699 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bkzwvd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.489285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.496395 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gwpv2"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.500175 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.500414 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cqsvg" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" containerID="cri-o://14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.511781 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.517819 4751 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.521854 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078 is running failed: container process not found" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 21:30:21 crc kubenswrapper[4751]: E0130 21:30:21.522112 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-cqsvg" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.528284 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.529020 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-twcnd" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" containerID="cri-o://da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.545272 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.559498 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.559801 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bd2xs" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" containerID="cri-o://960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.570237 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.570545 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" containerID="cri-o://6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.576450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.576730 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btf57" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" containerID="cri-o://7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" 
gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.584367 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.585304 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.591113 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.591449 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4hxc" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" containerID="cri-o://a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" gracePeriod=30 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.596649 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700461 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.700619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.802951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.807490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.812931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7804f857-fb14-4305-97cc-c966621a55b2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.825157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7666v\" (UniqueName: \"kubernetes.io/projected/7804f857-fb14-4305-97cc-c966621a55b2-kube-api-access-7666v\") pod \"marketplace-operator-79b997595-s9tfl\" (UID: \"7804f857-fb14-4305-97cc-c966621a55b2\") " pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.938688 4751 generic.go:334] "Generic (PLEG): container finished" podID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerID="6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.938783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerDied","Data":"6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.944392 4751 generic.go:334] "Generic (PLEG): container finished" podID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerID="960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.944473 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.946690 4751 generic.go:334] "Generic (PLEG): container finished" podID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.946745 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.949902 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerID="da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" exitCode=0 Jan 30 21:30:21 crc 
kubenswrapper[4751]: I0130 21:30:21.949969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.954553 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerID="7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.954692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.957785 4751 generic.go:334] "Generic (PLEG): container finished" podID="448ce159-6181-433b-a28a-d00b9240b5af" containerID="a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" exitCode=0 Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.957834 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce"} Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.996465 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbca202-59f0-4772-a82c-8c448cbc4c70" path="/var/lib/kubelet/pods/1cbca202-59f0-4772-a82c-8c448cbc4c70/volumes" Jan 30 21:30:21 crc kubenswrapper[4751]: I0130 21:30:21.998037 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2d9dc5-eee5-4e7f-86dd-9b7eb581429e" path="/var/lib/kubelet/pods/de2d9dc5-eee5-4e7f-86dd-9b7eb581429e/volumes" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.001472 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedbc66c-13e3-4312-85e6-00d215e5f2ff" path="/var/lib/kubelet/pods/dedbc66c-13e3-4312-85e6-00d215e5f2ff/volumes" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.079690 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.569665 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s9tfl"] Jan 30 21:30:22 crc kubenswrapper[4751]: W0130 21:30:22.579969 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7804f857_fb14_4305_97cc_c966621a55b2.slice/crio-19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf WatchSource:0}: Error finding container 19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf: Status 404 returned error can't find the container with id 19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.987103 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" event={"ID":"7804f857-fb14-4305-97cc-c966621a55b2","Type":"ContainerStarted","Data":"fd26b0bda021a3fb2c5013c401bf33aa0285fad6011eea6cb8dab5b9f4ad458c"} Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.987145 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" event={"ID":"7804f857-fb14-4305-97cc-c966621a55b2","Type":"ContainerStarted","Data":"19ad62459394104724127966dc1efbd3069eca9ab57c628b2cec1833d0051aaf"} Jan 30 21:30:22 crc kubenswrapper[4751]: I0130 21:30:22.992724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerStarted","Data":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.220980 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.306620 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.337426 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd"
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343571 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343647 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") pod \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\" (UID: \"cdcb33b0-97a6-4ded-96b6-1c5bd9053977\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343763 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343858 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") pod \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\" (UID: \"ac49c6a1-fa74-49f3-ba94-c5a469df4a93\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.343926 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") pod \"6a24b1f1-0656-41ef-826d-c6c40f96b470\" (UID: \"6a24b1f1-0656-41ef-826d-c6c40f96b470\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.345132 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities" (OuterVolumeSpecName: "utilities") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.345790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.346145 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities" (OuterVolumeSpecName: "utilities") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.350365 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q" (OuterVolumeSpecName: "kube-api-access-qqk8q") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "kube-api-access-qqk8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.351587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4" (OuterVolumeSpecName: "kube-api-access-gnnj4") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "kube-api-access-gnnj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.354737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cdcb33b0-97a6-4ded-96b6-1c5bd9053977" (UID: "cdcb33b0-97a6-4ded-96b6-1c5bd9053977"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.359536 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg"
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.369119 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc"
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.377374 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj" (OuterVolumeSpecName: "kube-api-access-z8stj") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "kube-api-access-z8stj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.394194 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a24b1f1-0656-41ef-826d-c6c40f96b470" (UID: "6a24b1f1-0656-41ef-826d-c6c40f96b470"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.435868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac49c6a1-fa74-49f3-ba94-c5a469df4a93" (UID: "ac49c6a1-fa74-49f3-ba94-c5a469df4a93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.473156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.473734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") pod \"448ce159-6181-433b-a28a-d00b9240b5af\" (UID: \"448ce159-6181-433b-a28a-d00b9240b5af\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") pod \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\" (UID: \"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b\") "
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474608 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474626 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/empty-dir/6a24b1f1-0656-41ef-826d-c6c40f96b470-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474637 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnnj4\" (UniqueName: \"kubernetes.io/projected/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-kube-api-access-gnnj4\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474649 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474658 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8stj\" (UniqueName: \"kubernetes.io/projected/6a24b1f1-0656-41ef-826d-c6c40f96b470-kube-api-access-z8stj\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474667 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdcb33b0-97a6-4ded-96b6-1c5bd9053977-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474675 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474684 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqk8q\" (UniqueName: \"kubernetes.io/projected/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-kube-api-access-qqk8q\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.474692 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac49c6a1-fa74-49f3-ba94-c5a469df4a93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476028 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities" (OuterVolumeSpecName: "utilities") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg" (OuterVolumeSpecName: "kube-api-access-8d6jg") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "kube-api-access-8d6jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.476766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities" (OuterVolumeSpecName: "utilities") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.478124 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn" (OuterVolumeSpecName: "kube-api-access-d7gzn") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "kube-api-access-d7gzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.529119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" (UID: "658cd3aa-28fd-4fdd-bbce-ab07effcdc0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575783 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575814 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575826 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575837 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7gzn\" (UniqueName: \"kubernetes.io/projected/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b-kube-api-access-d7gzn\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.575846 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6jg\" (UniqueName: \"kubernetes.io/projected/448ce159-6181-433b-a28a-d00b9240b5af-kube-api-access-8d6jg\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.592567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "448ce159-6181-433b-a28a-d00b9240b5af" (UID: "448ce159-6181-433b-a28a-d00b9240b5af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.627265 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.676824 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.676899 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") pod \"a791b2a3-aead-4130-bdfa-e219f2d47593\" (UID: \"a791b2a3-aead-4130-bdfa-e219f2d47593\") " Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677385 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/448ce159-6181-433b-a28a-d00b9240b5af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.677971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities" (OuterVolumeSpecName: "utilities") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.680125 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4" (OuterVolumeSpecName: "kube-api-access-7kcw4") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "kube-api-access-7kcw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.725216 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a791b2a3-aead-4130-bdfa-e219f2d47593" (UID: "a791b2a3-aead-4130-bdfa-e219f2d47593"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779295 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779339 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kcw4\" (UniqueName: \"kubernetes.io/projected/a791b2a3-aead-4130-bdfa-e219f2d47593-kube-api-access-7kcw4\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:23 crc kubenswrapper[4751]: I0130 21:30:23.779351 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a791b2a3-aead-4130-bdfa-e219f2d47593-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007227 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqsvg" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqsvg" event={"ID":"658cd3aa-28fd-4fdd-bbce-ab07effcdc0b","Type":"ContainerDied","Data":"fe5fcfb1fc44ccdf6c4a741d4d16f63708aa615ae6f06f1cd0921da957c348dc"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.007343 4751 scope.go:117] "RemoveContainer" containerID="14dc0debd4e467cc635d37215c68233957a35c1f4ea5e8648c5f3af34e750078" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.012544 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twcnd" event={"ID":"ac49c6a1-fa74-49f3-ba94-c5a469df4a93","Type":"ContainerDied","Data":"1789e26083b6d1b5bcaf1c28e823207fbb2d904374cfefdfc648a991a801687a"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.012872 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twcnd" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.020829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btf57" event={"ID":"6a24b1f1-0656-41ef-826d-c6c40f96b470","Type":"ContainerDied","Data":"e2bcb606a7c5b09ddbf81239b5c38048d9cdd9c5406a2b4fba58566803a5b46b"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.020832 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btf57" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.048435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4hxc" event={"ID":"448ce159-6181-433b-a28a-d00b9240b5af","Type":"ContainerDied","Data":"73ad198924f3e1eade0a31a7aa5614d242dcbb52f38d6f5161410d402c09b507"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.048503 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4hxc" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.055274 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd2xs" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.055372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd2xs" event={"ID":"a791b2a3-aead-4130-bdfa-e219f2d47593","Type":"ContainerDied","Data":"434124a9abf250cc8847456cd8dfc444504ed2c5f79f019406475bd5e02dd626"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.064288 4751 scope.go:117] "RemoveContainer" containerID="aed57bf3af8a9be4354110c686278989c4d522de65d68ab7801998493854f3c7" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.064447 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.065978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" event={"ID":"cdcb33b0-97a6-4ded-96b6-1c5bd9053977","Type":"ContainerDied","Data":"644b46c2a2fab923799c15a7a1cf7953e3083f14e5f91214c52144072cb6a7fb"} Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-76rml" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066140 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-82fwr" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" containerID="cri-o://e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" gracePeriod=30 Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.066481 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.073409 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.075593 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cqsvg"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.081812 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.092263 4751 scope.go:117] "RemoveContainer" containerID="ef163a3afba5ac1eb335431aa2395ea6c9a037884a732e5ca52e5147972cf403" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.093920 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-twcnd"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.101509 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.108141 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btf57"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.117536 4751 scope.go:117] "RemoveContainer" containerID="da576de8f1c9a0effcc2ee958957d7e00c5cf40151114d08518c5b0c29f0fc29" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.118304 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.123678 4751 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/community-operators-bd2xs"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.125219 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s9tfl" podStartSLOduration=3.12520214 podStartE2EDuration="3.12520214s" podCreationTimestamp="2026-01-30 21:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:30:24.108650447 +0000 UTC m=+962.854473096" watchObservedRunningTime="2026-01-30 21:30:24.12520214 +0000 UTC m=+962.871024789" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.133369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.135188 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4hxc"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.139117 4751 scope.go:117] "RemoveContainer" containerID="2c82591f69d50ae83fda7597991bd617784911392dd33cf4f25ec660904d8e1e" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.154918 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-82fwr" podStartSLOduration=4.670945385 podStartE2EDuration="10.154892876s" podCreationTimestamp="2026-01-30 21:30:14 +0000 UTC" firstStartedPulling="2026-01-30 21:30:15.870128727 +0000 UTC m=+954.615951376" lastFinishedPulling="2026-01-30 21:30:21.354076188 +0000 UTC m=+960.099898867" observedRunningTime="2026-01-30 21:30:24.150411616 +0000 UTC m=+962.896234275" watchObservedRunningTime="2026-01-30 21:30:24.154892876 +0000 UTC m=+962.900715535" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.169428 4751 scope.go:117] "RemoveContainer" containerID="bb7468b7d7c0079e6174ab6fab8062e8d6fe8734e0fcc33a217d950b9c4934f4" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.177687 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.183840 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-76rml"] Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.246058 4751 scope.go:117] "RemoveContainer" containerID="7b0cb114f2b94c0af64389530dc0e77b4ef4178db18be6009544673f334a8088" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.259462 4751 scope.go:117] "RemoveContainer" containerID="b5fe421c84c49a0fce9c766932eb37dc6ebd8f10a339e43911d566e5bf55820f" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.273784 4751 scope.go:117] "RemoveContainer" containerID="bbac4a5fe3fc00609faebe7f98affa8ef8408a492e79ad4eb2e51f42853acfd7" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.298806 4751 scope.go:117] "RemoveContainer" containerID="a6b343dd72b5235871a55e1a2c2def12bf611b5a0982df3c0c87934e222e51ce" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.317557 4751 scope.go:117] "RemoveContainer" containerID="8c6af28f5b6624524db9760ac1812d7255cfb7aa12f4c630cb631a41508a66c5" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.340521 4751 scope.go:117] "RemoveContainer" containerID="10d985df0a9120f84aedb7a8499aa2e73fa1eb168ac9332a258bbeadbd76d96e" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.357953 4751 scope.go:117] "RemoveContainer" 
containerID="960f258866c4433eda726c04fdab80b057c22c0920935676513c71ebdb592216" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.376707 4751 scope.go:117] "RemoveContainer" containerID="046ce5e1f77fe5269aa0733495a774c7014a135ba89622c5ae3b5e42a5e2bcc2" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.396150 4751 scope.go:117] "RemoveContainer" containerID="ab432e40787fc0f8c27455630b3e162f083b0e2d799d4a3e7e2a6dfb88ac3b16" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.416915 4751 scope.go:117] "RemoveContainer" containerID="6555bc08329fa2ff543d4810ec47f9a72956f19cbf66209a9749cc91438e7744" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.487352 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82fwr_3f294d30-8a47-4f81-930d-3c0bbf564a2e/registry-server/0.log" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.488240 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594164 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.594272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") pod \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\" (UID: \"3f294d30-8a47-4f81-930d-3c0bbf564a2e\") " Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.595149 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities" (OuterVolumeSpecName: "utilities") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.600772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst" (OuterVolumeSpecName: "kube-api-access-ksmst") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "kube-api-access-ksmst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.657558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f294d30-8a47-4f81-930d-3c0bbf564a2e" (UID: "3f294d30-8a47-4f81-930d-3c0bbf564a2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697011 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmst\" (UniqueName: \"kubernetes.io/projected/3f294d30-8a47-4f81-930d-3c0bbf564a2e-kube-api-access-ksmst\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697418 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:24 crc kubenswrapper[4751]: I0130 21:30:24.697433 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f294d30-8a47-4f81-930d-3c0bbf564a2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.088248 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-82fwr_3f294d30-8a47-4f81-930d-3c0bbf564a2e/registry-server/0.log" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.089971 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" exitCode=1 Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090079 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82fwr" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090110 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82fwr" event={"ID":"3f294d30-8a47-4f81-930d-3c0bbf564a2e","Type":"ContainerDied","Data":"113c2bb9a9bf499f765a898393b70385f7f56309b5e53986a1cbd21e2ebfc4a9"} Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.090200 4751 scope.go:117] "RemoveContainer" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.121756 4751 scope.go:117] "RemoveContainer" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.155106 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.166444 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-82fwr"] Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.167445 4751 scope.go:117] "RemoveContainer" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.199925 4751 scope.go:117] "RemoveContainer" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.200377 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": container with ID starting with 
e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88 not found: ID does not exist" containerID="e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200432 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88"} err="failed to get container status \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": rpc error: code = NotFound desc = could not find container \"e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88\": container with ID starting with e882b991fe554f5b55ac2c47e1792c2cd21238b11811faa8386858dcb855db88 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200467 4751 scope.go:117] "RemoveContainer" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.200756 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": container with ID starting with b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91 not found: ID does not exist" containerID="b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200793 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91"} err="failed to get container status \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": rpc error: code = NotFound desc = could not find container \"b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91\": container with ID starting with b50541ad3e512cc4dea328b45886bb9c17662fb39260a2cbf961ba2a0be6ff91 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.200819 4751 scope.go:117] "RemoveContainer" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.201081 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": container with ID starting with 0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701 not found: ID does not exist" containerID="0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.201119 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701"} err="failed to get container status \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": rpc error: code = NotFound desc = could not find container \"0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701\": container with ID starting with 0d322c5e3834de44a2bc28394d54b57ce1edfd2b2f61ffa2b8da3ccaad796701 not found: ID does not exist" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594206 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"] Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594834 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594871 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594894 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594909 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594927 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594961 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.594974 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.594991 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595002 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595020 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595051 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595063 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595095 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595117 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595133 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595157 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595172 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595193 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595207 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595226 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595238 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595261 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595276 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595295 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595308 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595323 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595365 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595379 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595393 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595414 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595426 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-utilities" Jan 30 21:30:25 crc kubenswrapper[4751]: E0130 21:30:25.595441 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595453 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: 
E0130 21:30:25.595467 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595480 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="extract-content" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595696 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595714 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595748 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595761 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="448ce159-6181-433b-a28a-d00b9240b5af" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595783 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595802 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" containerName="marketplace-operator" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.595820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" containerName="registry-server" Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.597850 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.600417 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.602989 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"]
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.715653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.787914 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"]
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.789907 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.801723 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"]
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.802700 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817538 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.817684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.818193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-utilities\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.818271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b187a442-317c-42c9-ba1a-ff41e0b9bc90-catalog-content\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.844777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kc9\" (UniqueName: \"kubernetes.io/projected/b187a442-317c-42c9-ba1a-ff41e0b9bc90-kube-api-access-w7kc9\") pod \"redhat-marketplace-n5g7x\" (UID: \"b187a442-317c-42c9-ba1a-ff41e0b9bc90\") " pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919309 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.919851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-catalog-content\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.920082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac62ab3-6625-4680-a70b-235f054baa64-utilities\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.927082 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5g7x"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.940425 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbs7k\" (UniqueName: \"kubernetes.io/projected/fac62ab3-6625-4680-a70b-235f054baa64-kube-api-access-pbs7k\") pod \"redhat-operators-zps7r\" (UID: \"fac62ab3-6625-4680-a70b-235f054baa64\") " pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.985888 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f294d30-8a47-4f81-930d-3c0bbf564a2e" path="/var/lib/kubelet/pods/3f294d30-8a47-4f81-930d-3c0bbf564a2e/volumes"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.994144 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448ce159-6181-433b-a28a-d00b9240b5af" path="/var/lib/kubelet/pods/448ce159-6181-433b-a28a-d00b9240b5af/volumes"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.994850 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658cd3aa-28fd-4fdd-bbce-ab07effcdc0b" path="/var/lib/kubelet/pods/658cd3aa-28fd-4fdd-bbce-ab07effcdc0b/volumes"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.996253 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a24b1f1-0656-41ef-826d-c6c40f96b470" path="/var/lib/kubelet/pods/6a24b1f1-0656-41ef-826d-c6c40f96b470/volumes"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.996853 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a791b2a3-aead-4130-bdfa-e219f2d47593" path="/var/lib/kubelet/pods/a791b2a3-aead-4130-bdfa-e219f2d47593/volumes"
Jan 30 21:30:25 crc kubenswrapper[4751]: I0130 21:30:25.997564 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac49c6a1-fa74-49f3-ba94-c5a469df4a93" path="/var/lib/kubelet/pods/ac49c6a1-fa74-49f3-ba94-c5a469df4a93/volumes"
Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.002039 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcb33b0-97a6-4ded-96b6-1c5bd9053977" path="/var/lib/kubelet/pods/cdcb33b0-97a6-4ded-96b6-1c5bd9053977/volumes"
Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.111702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zps7r"
Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.348482 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5g7x"]
Jan 30 21:30:26 crc kubenswrapper[4751]: W0130 21:30:26.352264 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb187a442_317c_42c9_ba1a_ff41e0b9bc90.slice/crio-c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d WatchSource:0}: Error finding container c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d: Status 404 returned error can't find the container with id c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d
Jan 30 21:30:26 crc kubenswrapper[4751]: I0130 21:30:26.540480 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zps7r"]
Jan 30 21:30:26 crc kubenswrapper[4751]: W0130 21:30:26.605646 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfac62ab3_6625_4680_a70b_235f054baa64.slice/crio-692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2 WatchSource:0}: Error finding container 692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2: Status 404 returned error can't find the container with id 692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116369 4751 generic.go:334] "Generic (PLEG): container finished" podID="fac62ab3-6625-4680-a70b-235f054baa64" containerID="54fe6635e439fdc023584a1a5e0a30703481be8132e20d29f58e8655731f3350" exitCode=0
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerDied","Data":"54fe6635e439fdc023584a1a5e0a30703481be8132e20d29f58e8655731f3350"}
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.116886 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"692e0d371b83b0506b343dd1acdb9421c1737e030f4103c895488a224c673bd2"}
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119727 4751 generic.go:334] "Generic (PLEG): container finished" podID="b187a442-317c-42c9-ba1a-ff41e0b9bc90" containerID="6fc910528e000147fc6e9e3ad8eb2e6b5c8b82173f5d9a6e1f8d75667e00a7a0" exitCode=0
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerDied","Data":"6fc910528e000147fc6e9e3ad8eb2e6b5c8b82173f5d9a6e1f8d75667e00a7a0"}
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.119780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"c92ffc172810c61f06b70d401c1d320d022431dbc7183c734478e3087ca6419d"}
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.994601 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"]
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.996097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"]
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.996205 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:27 crc kubenswrapper[4751]: I0130 21:30:27.999108 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061160 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.061289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.130594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305"}
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.133708 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa"}
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163563 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.163586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.164370 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7"
21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.164370 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.164644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.187193 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.192044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"certified-operators-kcjb7\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.200412 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.200511 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.204726 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264790 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.264915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.335737 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367045 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.367351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.368131 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-catalog-content\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.368611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f675e6ee-15d0-4fa7-94ec-c08976e45a20-utilities\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.389426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbd6r\" (UniqueName: \"kubernetes.io/projected/f675e6ee-15d0-4fa7-94ec-c08976e45a20-kube-api-access-zbd6r\") pod \"community-operators-qvqpq\" (UID: \"f675e6ee-15d0-4fa7-94ec-c08976e45a20\") " pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.532629 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.817042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 21:30:28 crc kubenswrapper[4751]: W0130 21:30:28.825632 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0c60341_0c3e_4be5_a2a2_e5a4ed9b5776.slice/crio-95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4 WatchSource:0}: Error finding container 95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4: Status 404 returned error can't find the container with id 95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4 Jan 30 21:30:28 crc kubenswrapper[4751]: I0130 21:30:28.974076 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvqpq"] Jan 30 21:30:28 crc kubenswrapper[4751]: W0130 21:30:28.979534 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf675e6ee_15d0_4fa7_94ec_c08976e45a20.slice/crio-8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4 WatchSource:0}: Error finding container 8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4: Status 404 returned error can't find the container with id 8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.145264 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b" exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.145365 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.146437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerStarted","Data":"95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.152238 4751 generic.go:334] "Generic (PLEG): container finished" podID="fac62ab3-6625-4680-a70b-235f054baa64" containerID="faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305" exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.153239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerDied","Data":"faab27cb6633bc44cac4e500c3c4abca1f48a467525fb1ddbfb21c5a35e15305"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.156622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"8e90208d71032a7d08484761f34efbdfd42b0aa3c41a12f2d7f0b42eda0edff4"} Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.163164 4751 generic.go:334] "Generic (PLEG): container finished" podID="b187a442-317c-42c9-ba1a-ff41e0b9bc90" containerID="fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa" 
exitCode=0 Jan 30 21:30:29 crc kubenswrapper[4751]: I0130 21:30:29.163215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerDied","Data":"fc85a74dbb30af00d8b2934bf49b3ea553d7c16a81fa37ea788918daa04a73fa"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.177982 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65" exitCode=0 Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.178082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.183187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zps7r" event={"ID":"fac62ab3-6625-4680-a70b-235f054baa64","Type":"ContainerStarted","Data":"10cc6a49d8f6ffeef2e7fac35d00495ce9eb4ba01ab3857583f55707658823c3"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.189009 4751 generic.go:334] "Generic (PLEG): container finished" podID="f675e6ee-15d0-4fa7-94ec-c08976e45a20" containerID="28b79fec40c55c7e7f50dba6771bbbaaa9daf29c3b34bea51e8235c501364afe" exitCode=0 Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.189091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerDied","Data":"28b79fec40c55c7e7f50dba6771bbbaaa9daf29c3b34bea51e8235c501364afe"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.192200 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5g7x" event={"ID":"b187a442-317c-42c9-ba1a-ff41e0b9bc90","Type":"ContainerStarted","Data":"f59145b3135d1eddb1361429c0bb16e51dba7bb7af33f696367ab8138509f419"} Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.260392 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zps7r" podStartSLOduration=2.780505939 podStartE2EDuration="5.260320341s" podCreationTimestamp="2026-01-30 21:30:25 +0000 UTC" firstStartedPulling="2026-01-30 21:30:27.120186165 +0000 UTC m=+965.866008814" lastFinishedPulling="2026-01-30 21:30:29.600000557 +0000 UTC m=+968.345823216" observedRunningTime="2026-01-30 21:30:30.253423857 +0000 UTC m=+968.999246516" watchObservedRunningTime="2026-01-30 21:30:30.260320341 +0000 UTC m=+969.006142990" Jan 30 21:30:30 crc kubenswrapper[4751]: I0130 21:30:30.273215 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5g7x" podStartSLOduration=2.774970921 podStartE2EDuration="5.273199277s" podCreationTimestamp="2026-01-30 21:30:25 +0000 UTC" firstStartedPulling="2026-01-30 21:30:27.121702365 +0000 UTC m=+965.867525014" lastFinishedPulling="2026-01-30 21:30:29.619930701 +0000 UTC m=+968.365753370" observedRunningTime="2026-01-30 21:30:30.269088097 +0000 UTC m=+969.014910756" watchObservedRunningTime="2026-01-30 21:30:30.273199277 +0000 UTC m=+969.019021926" Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.211609 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980"} Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.214442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerStarted","Data":"866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac"} Jan 30 21:30:31 crc kubenswrapper[4751]: I0130 21:30:31.260230 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcjb7" podStartSLOduration=2.82838395 podStartE2EDuration="4.260209981s" podCreationTimestamp="2026-01-30 21:30:27 +0000 UTC" firstStartedPulling="2026-01-30 21:30:29.150047992 +0000 UTC m=+967.895870641" lastFinishedPulling="2026-01-30 21:30:30.581874013 +0000 UTC m=+969.327696672" observedRunningTime="2026-01-30 21:30:31.258994869 +0000 UTC m=+970.004817528" watchObservedRunningTime="2026-01-30 21:30:31.260209981 +0000 UTC m=+970.006032640" Jan 30 21:30:32 crc kubenswrapper[4751]: I0130 21:30:32.225877 4751 generic.go:334] "Generic (PLEG): container finished" podID="f675e6ee-15d0-4fa7-94ec-c08976e45a20" containerID="5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980" exitCode=0 Jan 30 21:30:32 crc kubenswrapper[4751]: I0130 21:30:32.226012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerDied","Data":"5483ad83271f18d69e048ebc2aee6a4fed47d89d32c14da67667286931d2f980"} Jan 30 21:30:33 crc kubenswrapper[4751]: I0130 21:30:33.235531 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvqpq" event={"ID":"f675e6ee-15d0-4fa7-94ec-c08976e45a20","Type":"ContainerStarted","Data":"8db2f5da25579fcf378ad90f3244544b9833c335b30adddc132277b2aa70b810"} Jan 30 21:30:33 crc kubenswrapper[4751]: I0130 21:30:33.266678 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvqpq" podStartSLOduration=2.859830926 podStartE2EDuration="5.266659351s" podCreationTimestamp="2026-01-30 21:30:28 +0000 UTC" firstStartedPulling="2026-01-30 21:30:30.194602049 +0000 UTC m=+968.940424698" lastFinishedPulling="2026-01-30 21:30:32.601430474 +0000 UTC m=+971.347253123" observedRunningTime="2026-01-30 21:30:33.260805274 +0000 UTC m=+972.006627933" watchObservedRunningTime="2026-01-30 21:30:33.266659351 +0000 UTC m=+972.012482010" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.928048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.928382 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:35 crc kubenswrapper[4751]: I0130 21:30:35.997613 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.112196 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.112268 4751 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:36 crc kubenswrapper[4751]: I0130 21:30:36.339543 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5g7x" Jan 30 21:30:37 crc kubenswrapper[4751]: I0130 21:30:37.192350 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zps7r" podUID="fac62ab3-6625-4680-a70b-235f054baa64" containerName="registry-server" probeResult="failure" output=< Jan 30 21:30:37 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:30:37 crc kubenswrapper[4751]: > Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.336346 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.336427 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.429843 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.533206 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.533266 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:38 crc kubenswrapper[4751]: I0130 21:30:38.589268 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:39 crc kubenswrapper[4751]: I0130 21:30:39.338698 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 21:30:39 crc kubenswrapper[4751]: I0130 21:30:39.373040 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvqpq" Jan 30 21:30:46 crc kubenswrapper[4751]: I0130 21:30:46.183283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:46 crc kubenswrapper[4751]: I0130 21:30:46.233810 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zps7r" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.052727 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.054601 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.057131 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.074700 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.149676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.251925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.272888 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.424584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:30:58 crc kubenswrapper[4751]: I0130 21:30:58.938657 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh"] Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627276 4751 generic.go:334] "Generic (PLEG): container finished" podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="fe731b5cf787abbf126d827b1bca7991122721d525a7695c05666bcacf428912" exitCode=0 Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"fe731b5cf787abbf126d827b1bca7991122721d525a7695c05666bcacf428912"} Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.627373 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerStarted","Data":"e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6"} Jan 30 21:30:59 crc kubenswrapper[4751]: I0130 21:30:59.629265 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:31:01 crc kubenswrapper[4751]: I0130 21:31:01.655741 4751 generic.go:334] "Generic (PLEG): container finished" podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="16efb573e2aed225b5226c4007411ef3aa051dbd38a6bf1e12978c5bc781d705" exitCode=0 Jan 30 21:31:01 crc kubenswrapper[4751]: I0130 21:31:01.655793 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"16efb573e2aed225b5226c4007411ef3aa051dbd38a6bf1e12978c5bc781d705"} Jan 30 21:31:02 crc kubenswrapper[4751]: I0130 21:31:02.672564 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="eac36070-4c04-460f-bfbb-e77659bad07e" containerID="f41e2a972da4eebbe257cb3cf8c41d744879120c462a9471dc47984afeb89ac5" exitCode=0 Jan 30 21:31:02 crc kubenswrapper[4751]: I0130 21:31:02.672929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"f41e2a972da4eebbe257cb3cf8c41d744879120c462a9471dc47984afeb89ac5"} Jan 30 21:31:03 crc kubenswrapper[4751]: I0130 21:31:03.994058 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.149855 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.149916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.150094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") pod \"eac36070-4c04-460f-bfbb-e77659bad07e\" (UID: \"eac36070-4c04-460f-bfbb-e77659bad07e\") " Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.150842 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle" (OuterVolumeSpecName: "bundle") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.157557 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p" (OuterVolumeSpecName: "kube-api-access-ph68p") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "kube-api-access-ph68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.163560 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util" (OuterVolumeSpecName: "util") pod "eac36070-4c04-460f-bfbb-e77659bad07e" (UID: "eac36070-4c04-460f-bfbb-e77659bad07e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252232 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph68p\" (UniqueName: \"kubernetes.io/projected/eac36070-4c04-460f-bfbb-e77659bad07e-kube-api-access-ph68p\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252656 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.252786 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac36070-4c04-460f-bfbb-e77659bad07e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" event={"ID":"eac36070-4c04-460f-bfbb-e77659bad07e","Type":"ContainerDied","Data":"e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6"} Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693713 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6eca3e065969b7e901287ac9ec8650a7e9ad46bb1c8233e59e733732aaf56e6" Jan 30 21:31:04 crc kubenswrapper[4751]: I0130 21:31:04.693777 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.044260 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045282 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045304 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045576 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="pull" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045595 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="pull" Jan 30 21:31:08 crc kubenswrapper[4751]: E0130 21:31:08.045631 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="util" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045644 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="util" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.045940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac36070-4c04-460f-bfbb-e77659bad07e" containerName="extract" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.046847 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.048916 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pkvv8" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.049407 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.051798 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.059212 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.214265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.315543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.346146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhc6\" (UniqueName: \"kubernetes.io/projected/c9f603b5-de3a-4d5e-acc1-6da32a99dcaa-kube-api-access-5hhc6\") pod \"nmstate-operator-646758c888-k49vc\" (UID: \"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa\") " pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.373405 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" Jan 30 21:31:08 crc kubenswrapper[4751]: I0130 21:31:08.802922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-k49vc"] Jan 30 21:31:09 crc kubenswrapper[4751]: I0130 21:31:09.745592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" event={"ID":"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa","Type":"ContainerStarted","Data":"71ecaefdf49db2d2399b3d8f99497f479242692b47863a618ecfc9abc36a48fc"} Jan 30 21:31:11 crc kubenswrapper[4751]: I0130 21:31:11.762517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" event={"ID":"c9f603b5-de3a-4d5e-acc1-6da32a99dcaa","Type":"ContainerStarted","Data":"3966b75bb810a7c8f152fabb814d68de8b3f83c8fd1b635840ddabe1f21acc26"} Jan 30 21:31:11 crc kubenswrapper[4751]: I0130 21:31:11.791834 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-k49vc" podStartSLOduration=1.774525896 podStartE2EDuration="3.791813986s" podCreationTimestamp="2026-01-30 21:31:08 +0000 UTC" firstStartedPulling="2026-01-30 21:31:08.819423337 +0000 UTC m=+1007.565245976" lastFinishedPulling="2026-01-30 21:31:10.836711417 +0000 UTC m=+1009.582534066" observedRunningTime="2026-01-30 21:31:11.784591262 +0000 UTC m=+1010.530413921" watchObservedRunningTime="2026-01-30 21:31:11.791813986 +0000 UTC m=+1010.537636635" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.638045 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.640080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.644511 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-llzl8" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.646664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.647518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.659807 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.665440 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.683173 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-d95cp"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.684129 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.687685 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.719375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.796361 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.797194 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.799896 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.800802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wglvk" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.806792 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820159 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820527 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.820686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: 
\"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: E0130 21:31:18.821537 4751 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 21:31:18 crc kubenswrapper[4751]: E0130 21:31:18.821661 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair podName:be191f8d-d8ce-4f29-95f1-1278c108ca11 nodeName:}" failed. No retries permitted until 2026-01-30 21:31:19.321643917 +0000 UTC m=+1018.067466566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-7hqmv" (UID: "be191f8d-d8ce-4f29-95f1-1278c108ca11") : secret "openshift-nmstate-webhook" not found Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.849795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4c5\" (UniqueName: \"kubernetes.io/projected/be191f8d-d8ce-4f29-95f1-1278c108ca11-kube-api-access-cv4c5\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.855793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7fdh\" (UniqueName: \"kubernetes.io/projected/f0ccd951-df7f-452f-b340-64fa7c9f9916-kube-api-access-p7fdh\") pod \"nmstate-metrics-54757c584b-rfrtx\" (UID: \"f0ccd951-df7f-452f-b340-64fa7c9f9916\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.922606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.922927 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-ovs-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-nmstate-lock\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.923917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924389 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.924699 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea5deed-9d07-45b2-b400-64b7c2336994-dbus-socket\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.940676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdghn\" (UniqueName: \"kubernetes.io/projected/eea5deed-9d07-45b2-b400-64b7c2336994-kube-api-access-gdghn\") pod \"nmstate-handler-d95cp\" (UID: \"eea5deed-9d07-45b2-b400-64b7c2336994\") " pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:18 crc kubenswrapper[4751]: I0130 21:31:18.960394 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.006238 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.006523 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.007136 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025778 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.025957 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.026356 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.026582 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2806dd41-f23b-466a-a187-4689685f6b86-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.030304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2806dd41-f23b-466a-a187-4689685f6b86-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.059824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdcz\" (UniqueName: \"kubernetes.io/projected/2806dd41-f23b-466a-a187-4689685f6b86-kube-api-access-9zdcz\") pod \"nmstate-console-plugin-7754f76f8b-kxkfz\" (UID: \"2806dd41-f23b-466a-a187-4689685f6b86\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.112740 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128749 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.128990 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.129055 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.129085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232820 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.232963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.233000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.233057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.234674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.235590 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.237910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.239840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.252885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"console-6b64b75d5d-kgc46\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.334943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.339712 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.339923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/be191f8d-d8ce-4f29-95f1-1278c108ca11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7hqmv\" (UID: \"be191f8d-d8ce-4f29-95f1-1278c108ca11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.431490 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rfrtx"] Jan 30 21:31:19 crc kubenswrapper[4751]: W0130 21:31:19.455884 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0ccd951_df7f_452f_b340_64fa7c9f9916.slice/crio-64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59 WatchSource:0}: Error finding container 64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59: Status 404 returned error can't find the container with id 64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59 Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.567455 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.643566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz"] Jan 30 21:31:19 crc kubenswrapper[4751]: W0130 21:31:19.652035 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2806dd41_f23b_466a_a187_4689685f6b86.slice/crio-bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff WatchSource:0}: Error finding container bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff: Status 404 returned error can't find the container with id bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.753308 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.843269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerStarted","Data":"940073f1b9050f0a93c1aa8e842c9477fdedfec5ed669f60a6eea0cf2c8dd11a"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.846733 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d95cp" event={"ID":"eea5deed-9d07-45b2-b400-64b7c2336994","Type":"ContainerStarted","Data":"a10696e35c41b57c15b609887b59ba11d6d803ac7552b8134fddbadad31c7e30"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.847912 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" event={"ID":"2806dd41-f23b-466a-a187-4689685f6b86","Type":"ContainerStarted","Data":"bee7bf0ae28ea8c607b3929519702ece404ce344fd7ff81c781f13045349b7ff"} Jan 30 21:31:19 crc kubenswrapper[4751]: I0130 21:31:19.853946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"64e250cbfa65ab4a294b231d61ec3fc1303a35e92b6965b0a33549a8ef025c59"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.038226 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv"] Jan 30 21:31:20 crc kubenswrapper[4751]: W0130 21:31:20.040550 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe191f8d_d8ce_4f29_95f1_1278c108ca11.slice/crio-da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3 WatchSource:0}: Error finding container da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3: Status 404 returned error can't find the container with id da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3 Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.865130 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerStarted","Data":"bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.866791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" 
event={"ID":"be191f8d-d8ce-4f29-95f1-1278c108ca11","Type":"ContainerStarted","Data":"da9814746878bea05aa4d008ade7ae0c0b6f70753c00d02f67e5f599e8f348b3"} Jan 30 21:31:20 crc kubenswrapper[4751]: I0130 21:31:20.888957 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b64b75d5d-kgc46" podStartSLOduration=2.888935257 podStartE2EDuration="2.888935257s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:31:20.885232658 +0000 UTC m=+1019.631055317" watchObservedRunningTime="2026-01-30 21:31:20.888935257 +0000 UTC m=+1019.634757896" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.881396 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"a8b90bd4589936ff872d3f0c8eb3fb9ad768ef6fa487c15757ae24d0f92b3401"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.883687 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" event={"ID":"be191f8d-d8ce-4f29-95f1-1278c108ca11","Type":"ContainerStarted","Data":"139d0b727b646bf2eb0a5bd5e8e1a92c889f740b95a64a90618c0cddc29c023c"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.884059 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.897209 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" event={"ID":"2806dd41-f23b-466a-a187-4689685f6b86","Type":"ContainerStarted","Data":"b8b565c0f9ba4f8e1679a418d5449bc10cfc0a4320215c6463b168f0d1b9f2a1"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.902721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-d95cp" event={"ID":"eea5deed-9d07-45b2-b400-64b7c2336994","Type":"ContainerStarted","Data":"dbf6efa995485c6f8a2054af59b9e5ad99090e7eb3e5da670b4938812eac18b9"} Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.908878 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.914214 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" podStartSLOduration=2.7968473879999998 podStartE2EDuration="4.913734178s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:20.043223411 +0000 UTC m=+1018.789046060" lastFinishedPulling="2026-01-30 21:31:22.160110201 +0000 UTC m=+1020.905932850" observedRunningTime="2026-01-30 21:31:22.907048259 +0000 UTC m=+1021.652870948" watchObservedRunningTime="2026-01-30 21:31:22.913734178 +0000 UTC m=+1021.659556837" Jan 30 21:31:22 crc kubenswrapper[4751]: I0130 21:31:22.966777 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-d95cp" podStartSLOduration=1.890659281 podStartE2EDuration="4.96675609s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.084146256 +0000 UTC m=+1017.829968905" lastFinishedPulling="2026-01-30 21:31:22.160243065 +0000 UTC m=+1020.906065714" observedRunningTime="2026-01-30 21:31:22.932411759 
+0000 UTC m=+1021.678234418" watchObservedRunningTime="2026-01-30 21:31:22.96675609 +0000 UTC m=+1021.712578769" Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.935506 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" event={"ID":"f0ccd951-df7f-452f-b340-64fa7c9f9916","Type":"ContainerStarted","Data":"ddb5f2b17b1acf96bf91aa7f2273f7c962f7ef1449c896f5b33eac87147bc4a7"} Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.964999 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rfrtx" podStartSLOduration=2.445839477 podStartE2EDuration="7.964975511s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.457717732 +0000 UTC m=+1018.203540381" lastFinishedPulling="2026-01-30 21:31:24.976853766 +0000 UTC m=+1023.722676415" observedRunningTime="2026-01-30 21:31:25.961064087 +0000 UTC m=+1024.706886766" watchObservedRunningTime="2026-01-30 21:31:25.964975511 +0000 UTC m=+1024.710798170" Jan 30 21:31:25 crc kubenswrapper[4751]: I0130 21:31:25.965284 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kxkfz" podStartSLOduration=5.463725275 podStartE2EDuration="7.965274039s" podCreationTimestamp="2026-01-30 21:31:18 +0000 UTC" firstStartedPulling="2026-01-30 21:31:19.659884752 +0000 UTC m=+1018.405707401" lastFinishedPulling="2026-01-30 21:31:22.161433506 +0000 UTC m=+1020.907256165" observedRunningTime="2026-01-30 21:31:22.962073784 +0000 UTC m=+1021.707896493" watchObservedRunningTime="2026-01-30 21:31:25.965274039 +0000 UTC m=+1024.711096698" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.037108 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-d95cp" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.340400 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.340491 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.347726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:29 crc kubenswrapper[4751]: I0130 21:31:29.991833 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:31:30 crc kubenswrapper[4751]: I0130 21:31:30.071253 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:39 crc kubenswrapper[4751]: I0130 21:31:39.909155 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7hqmv" Jan 30 21:31:54 crc kubenswrapper[4751]: I0130 21:31:54.126604 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:31:54 crc kubenswrapper[4751]: I0130 21:31:54.127060 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.137437 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66d88878c9-plgvh" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" containerID="cri-o://b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" gracePeriod=15 Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.291889 4751 patch_prober.go:28] interesting pod/console-66d88878c9-plgvh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.78:8443/health\": dial tcp 10.217.0.78:8443: connect: connection refused" start-of-body= Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.291945 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-66d88878c9-plgvh" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.78:8443/health\": dial tcp 10.217.0.78:8443: connect: connection refused" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.954684 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d88878c9-plgvh_6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/console/0.log" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.955275 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981332 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981433 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981505 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981541 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981575 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.981626 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") pod \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\" (UID: \"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c\") " Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.982777 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config" (OuterVolumeSpecName: "console-config") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.982871 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.983105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.983301 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.989641 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.989404 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp" (OuterVolumeSpecName: "kube-api-access-pkqrp") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "kube-api-access-pkqrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:31:55 crc kubenswrapper[4751]: I0130 21:31:55.990215 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" (UID: "6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083485 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083511 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083520 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkqrp\" (UniqueName: \"kubernetes.io/projected/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-kube-api-access-pkqrp\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083530 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083546 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083554 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.083562 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225234 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66d88878c9-plgvh_6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/console/0.log" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225281 4751 generic.go:334] "Generic (PLEG): container finished" podID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" exitCode=2 Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225305 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerDied","Data":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225347 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66d88878c9-plgvh" event={"ID":"6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c","Type":"ContainerDied","Data":"0e693c5eb441ca00dae4f66390c3ffc4f2ac93c82973ea2489bbd8ae4743393e"} Jan 30 21:31:56 crc 
kubenswrapper[4751]: I0130 21:31:56.225363 4751 scope.go:117] "RemoveContainer" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.225371 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66d88878c9-plgvh" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.243497 4751 scope.go:117] "RemoveContainer" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: E0130 21:31:56.243880 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": container with ID starting with b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18 not found: ID does not exist" containerID="b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.243909 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18"} err="failed to get container status \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": rpc error: code = NotFound desc = could not find container \"b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18\": container with ID starting with b8da68d24d398052e970f15800d2f73b793cd125bb65462cf36be24e202afe18 not found: ID does not exist" Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.267505 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:56 crc kubenswrapper[4751]: I0130 21:31:56.277167 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66d88878c9-plgvh"] Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.839803 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:57 crc kubenswrapper[4751]: E0130 21:31:57.840373 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.840387 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.840574 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" containerName="console" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.841993 4751 util.go:30] "No sandbox for pod can be found. 
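[editor's note] The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" run above is a benign race: by the time the kubelet re-queries CRI-O for b8da68d2..., the container has already been removed. A common gRPC idiom for making such deletes idempotent is to treat NotFound as success, since the object being absent is the desired end state; this is a sketch of the pattern, not the kubelet's literal code:

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound swallows NotFound errors from the runtime: the
    // container is already gone, which is exactly what a delete wants.
    func ignoreNotFound(err error) error {
        if status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(ignoreNotFound(err))             // <nil>: treated as success
        fmt.Println(ignoreNotFound(errors.New("x"))) // other errors still surface
    }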
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.843475 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.853785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.910897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.910983 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.911078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:57 crc kubenswrapper[4751]: I0130 21:31:57.985044 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c" path="/var/lib/kubelet/pods/6cc0abec-01cf-49e2-bae8-0bbc07fd1d7c/volumes" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.012440 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.013015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.014732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.030891 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.158865 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:31:58 crc kubenswrapper[4751]: I0130 21:31:58.637164 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw"] Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.258824 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="015e49363f1ef00af3377dd3fbdb14f07dac52e78791d5185c7de6fd9d1315a5" exitCode=0 Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.258989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"015e49363f1ef00af3377dd3fbdb14f07dac52e78791d5185c7de6fd9d1315a5"} Jan 30 21:31:59 crc kubenswrapper[4751]: I0130 21:31:59.259315 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerStarted","Data":"75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f"} Jan 30 21:32:01 crc kubenswrapper[4751]: I0130 21:32:01.278374 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="019d6769c771749968382795035cedd534fac5774d0f2cfe6375c3f286f46059" exitCode=0 Jan 30 21:32:01 crc kubenswrapper[4751]: I0130 21:32:01.278480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"019d6769c771749968382795035cedd534fac5774d0f2cfe6375c3f286f46059"} Jan 30 21:32:02 crc kubenswrapper[4751]: I0130 
21:32:02.295281 4751 generic.go:334] "Generic (PLEG): container finished" podID="00263593-80af-4a40-a2c4-538f582434c4" containerID="45170160e5f46812feb877131837b613df8254eca544bf8b4de018c57971a777" exitCode=0 Jan 30 21:32:02 crc kubenswrapper[4751]: I0130 21:32:02.295373 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"45170160e5f46812feb877131837b613df8254eca544bf8b4de018c57971a777"} Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.656177 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712452 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712574 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.712661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") pod \"00263593-80af-4a40-a2c4-538f582434c4\" (UID: \"00263593-80af-4a40-a2c4-538f582434c4\") " Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.714269 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle" (OuterVolumeSpecName: "bundle") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.718530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp" (OuterVolumeSpecName: "kube-api-access-tvfrp") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "kube-api-access-tvfrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.728689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util" (OuterVolumeSpecName: "util") pod "00263593-80af-4a40-a2c4-538f582434c4" (UID: "00263593-80af-4a40-a2c4-538f582434c4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814245 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814354 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00263593-80af-4a40-a2c4-538f582434c4-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:03 crc kubenswrapper[4751]: I0130 21:32:03.814375 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvfrp\" (UniqueName: \"kubernetes.io/projected/00263593-80af-4a40-a2c4-538f582434c4-kube-api-access-tvfrp\") on node \"crc\" DevicePath \"\"" Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" event={"ID":"00263593-80af-4a40-a2c4-538f582434c4","Type":"ContainerDied","Data":"75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f"} Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317796 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75dd171c1aa81e591965bde29d70499232510555a67b3af35ad52ef7f57f165f" Jan 30 21:32:04 crc kubenswrapper[4751]: I0130 21:32:04.317852 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842224 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"] Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842886 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="pull" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="pull" Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842910 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="util" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842916 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="util" Jan 30 21:32:12 crc kubenswrapper[4751]: E0130 21:32:12.842938 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.842944 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.843071 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00263593-80af-4a40-a2c4-538f582434c4" containerName="extract" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.843572 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845227 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845422 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845427 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.845605 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.847617 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5wbs4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.860897 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"] Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910345 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:12 crc kubenswrapper[4751]: I0130 21:32:12.910466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.012488 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.019813 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-webhook-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.019851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-apiservice-cert\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.030084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5fd\" (UniqueName: \"kubernetes.io/projected/088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a-kube-api-access-bc5fd\") pod \"metallb-operator-controller-manager-6697664f96-w8tr4\" (UID: \"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a\") " pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.158278 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.175133 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"] Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.176190 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.178371 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.178544 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5tqht" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.179824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.190548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"] Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.317698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.418783 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.419720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.419749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 
21:32:13.425112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-apiservice-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.439016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61545af5-1133-4922-a477-9155212b642c-webhook-cert\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.447646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24d2\" (UniqueName: \"kubernetes.io/projected/61545af5-1133-4922-a477-9155212b642c-kube-api-access-q24d2\") pod \"metallb-operator-webhook-server-597477f4b5-q868h\" (UID: \"61545af5-1133-4922-a477-9155212b642c\") " pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.537344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.591894 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4"] Jan 30 21:32:13 crc kubenswrapper[4751]: W0130 21:32:13.603345 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088ac2b9_a8fd_4aa9_854d_a62a9ecd5e9a.slice/crio-997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81 WatchSource:0}: Error finding container 997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81: Status 404 returned error can't find the container with id 997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81 Jan 30 21:32:13 crc kubenswrapper[4751]: W0130 21:32:13.983005 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61545af5_1133_4922_a477_9155212b642c.slice/crio-d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887 WatchSource:0}: Error finding container d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887: Status 404 returned error can't find the container with id d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887 Jan 30 21:32:13 crc kubenswrapper[4751]: I0130 21:32:13.998283 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-597477f4b5-q868h"] Jan 30 21:32:14 crc kubenswrapper[4751]: I0130 21:32:14.403662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" event={"ID":"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a","Type":"ContainerStarted","Data":"997b9ba7dfef1ba0677237f5fe11e44c94ac60aceb28edfd5b3edd8254ea7f81"} Jan 30 21:32:14 crc kubenswrapper[4751]: I0130 21:32:14.405404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" 
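[editor's note] The two manager.go:1169 warnings above look like a startup race: cAdvisor sees the new crio-* cgroup a beat before CRI-O can answer a status query for that container ID, so the first lookup returns 404; the ContainerStarted events that follow show both containers were found moments later. A bounded retry loop is the usual mitigation for this kind of transient miss; findContainer below is an illustrative stand-in, not a real runtime call:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New("can't find the container")

    // findContainer simulates a lookup that fails until the runtime
    // has registered the container (here, on the third attempt).
    func findContainer(id string, attempt int) error {
        if attempt < 2 {
            return errNotFound
        }
        return nil
    }

    func waitForContainer(id string) error {
        for attempt := 0; attempt < 5; attempt++ {
            if err := findContainer(id, attempt); err == nil {
                return nil
            }
            time.Sleep(100 * time.Millisecond) // back off and retry
        }
        return fmt.Errorf("container %s never appeared", id)
    }

    func main() {
        fmt.Println(waitForContainer("997b9ba7dfef1b"))
    }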
event={"ID":"61545af5-1133-4922-a477-9155212b642c","Type":"ContainerStarted","Data":"d4dc3103cb7694a11d1555e5af5a7e0d06278011634b90d902da353d081b7887"} Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.426031 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" event={"ID":"088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a","Type":"ContainerStarted","Data":"cf5488430e6bdbd56699cb2651d484f8e9cd245fb98da74f5eaebfbef4021e83"} Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.426487 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:17 crc kubenswrapper[4751]: I0130 21:32:17.446792 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" podStartSLOduration=2.4978778569999998 podStartE2EDuration="5.446771788s" podCreationTimestamp="2026-01-30 21:32:12 +0000 UTC" firstStartedPulling="2026-01-30 21:32:13.607963874 +0000 UTC m=+1072.353786523" lastFinishedPulling="2026-01-30 21:32:16.556857805 +0000 UTC m=+1075.302680454" observedRunningTime="2026-01-30 21:32:17.445515105 +0000 UTC m=+1076.191337794" watchObservedRunningTime="2026-01-30 21:32:17.446771788 +0000 UTC m=+1076.192594437" Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.444390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" event={"ID":"61545af5-1133-4922-a477-9155212b642c","Type":"ContainerStarted","Data":"d3ab546e5b4a99040f005e03e377e15b9982e88bb5c9536e1304ec50c8087d9c"} Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.445476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:19 crc kubenswrapper[4751]: I0130 21:32:19.460658 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" podStartSLOduration=2.134988968 podStartE2EDuration="6.460641987s" podCreationTimestamp="2026-01-30 21:32:13 +0000 UTC" firstStartedPulling="2026-01-30 21:32:13.993513524 +0000 UTC m=+1072.739336183" lastFinishedPulling="2026-01-30 21:32:18.319166553 +0000 UTC m=+1077.064989202" observedRunningTime="2026-01-30 21:32:19.459362243 +0000 UTC m=+1078.205184892" watchObservedRunningTime="2026-01-30 21:32:19.460641987 +0000 UTC m=+1078.206464636" Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.658555 4751 scope.go:117] "RemoveContainer" containerID="c37392c9c28591d30af6fa13864c5cce74c1af8be4cc91616fe120071a372d74" Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.696533 4751 scope.go:117] "RemoveContainer" containerID="8fca4ce58dcc1f6c42dc0ef9782db856f25df74c010a55261aa5d6ba4308f0b1" Jan 30 21:32:22 crc kubenswrapper[4751]: I0130 21:32:22.719671 4751 scope.go:117] "RemoveContainer" containerID="8d83fa7db634ce7a7858c27562ccdf062d9dea0a838bb5aacc88523290613dfc" Jan 30 21:32:24 crc kubenswrapper[4751]: I0130 21:32:24.126601 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:32:24 crc kubenswrapper[4751]: I0130 21:32:24.126866 4751 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:32:33 crc kubenswrapper[4751]: I0130 21:32:33.543595 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-597477f4b5-q868h" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.162686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6697664f96-w8tr4" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.844365 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9zjh6"] Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.849460 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854101 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854467 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.854693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b78r7" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.861033 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"] Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.862266 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.865027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.876347 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"] Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936355 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 
21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.936913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.964919 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zqbmp"] Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.974207 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zqbmp" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.978661 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.978894 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-n966j" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.979075 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 21:32:53 crc kubenswrapper[4751]: I0130 21:32:53.979091 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.005944 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"] Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.010460 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.017966 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.024222 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"] Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: 
\"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038968 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.038993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039082 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039181 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.039700 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-sockets\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.039845 4751 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.039907 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert podName:f8544a86-1b67-4c2e-9b56-ca708c47b4e8 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:54.539885069 +0000 UTC m=+1113.285707718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert") pod "frr-k8s-webhook-server-7df86c4f6c-2zl97" (UID: "f8544a86-1b67-4c2e-9b56-ca708c47b4e8") : secret "frr-k8s-webhook-server-cert" not found Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.040771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-reloader\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041055 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-conf\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.041993 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-frr-startup\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.059463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-metrics-certs\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") " pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.080164 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd58\" (UniqueName: \"kubernetes.io/projected/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-kube-api-access-wfd58\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.085800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8rm4\" (UniqueName: \"kubernetes.io/projected/e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4-kube-api-access-n8rm4\") pod \"frr-k8s-9zjh6\" (UID: \"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4\") 
" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127005 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127071 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127123 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127862 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.127935 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" gracePeriod=600 Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdw8\" (UniqueName: 
\"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.140762 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.140811 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist podName:e9fc7f0b-0bab-4435-82d8-b78841d64687 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:54.640794513 +0000 UTC m=+1113.386617162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist") pod "speaker-zqbmp" (UID: "e9fc7f0b-0bab-4435-82d8-b78841d64687") : secret "metallb-memberlist" not found Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.140828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.141402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9fc7f0b-0bab-4435-82d8-b78841d64687-metallb-excludel2\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.145913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-metrics-certs\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.171770 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztpsr\" (UniqueName: \"kubernetes.io/projected/e9fc7f0b-0bab-4435-82d8-b78841d64687-kube-api-access-ztpsr\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.183032 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.241896 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.242030 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.242092 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdw8\" (UniqueName: \"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.244927 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.245778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-metrics-certs\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.257756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41e79790-830a-48bb-93b6-dd55dc050acf-cert\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.287250 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdw8\" (UniqueName: \"kubernetes.io/projected/41e79790-830a-48bb-93b6-dd55dc050acf-kube-api-access-5zdw8\") pod \"controller-6968d8fdc4-p8nst\" (UID: \"41e79790-830a-48bb-93b6-dd55dc050acf\") " pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.385734 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.546734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.554766 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8544a86-1b67-4c2e-9b56-ca708c47b4e8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2zl97\" (UID: \"f8544a86-1b67-4c2e-9b56-ca708c47b4e8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.648861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.649085 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 21:32:54 crc kubenswrapper[4751]: E0130 21:32:54.649185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist podName:e9fc7f0b-0bab-4435-82d8-b78841d64687 nodeName:}" failed. No retries permitted until 2026-01-30 21:32:55.649159183 +0000 UTC m=+1114.394981832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist") pod "speaker-zqbmp" (UID: "e9fc7f0b-0bab-4435-82d8-b78841d64687") : secret "metallb-memberlist" not found Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732236 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" exitCode=0 Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732353 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234"} Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"} Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.732457 4751 scope.go:117] "RemoveContainer" containerID="ad350159473538b7294a1cb17b3c91bed6ccae12ecd005a2dc1c208ac650225b" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.736077 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"4ad7da4df8cbf53720cc60439b0d1a9d6e905acdc358824f918e94585616669b"} Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.800517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:32:54 crc kubenswrapper[4751]: I0130 21:32:54.824493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p8nst"] Jan 30 21:32:54 crc kubenswrapper[4751]: W0130 21:32:54.829921 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e79790_830a_48bb_93b6_dd55dc050acf.slice/crio-aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb WatchSource:0}: Error finding container aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb: Status 404 returned error can't find the container with id aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.250965 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97"] Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.669318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.684624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9fc7f0b-0bab-4435-82d8-b78841d64687-memberlist\") pod \"speaker-zqbmp\" (UID: \"e9fc7f0b-0bab-4435-82d8-b78841d64687\") " pod="metallb-system/speaker-zqbmp" Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"2f0cd3932ce00118212f2632f6400b2bf6938a51c21ebbd44cb2f5ccc96a28c3"} Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"4c7f186f59cf5dc402bf48f41ddf40e567aea195b575b317598bd105c7d3597c"} Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p8nst" event={"ID":"41e79790-830a-48bb-93b6-dd55dc050acf","Type":"ContainerStarted","Data":"aae88e56959a188e9891d429b5ece193865721750280788b7eb0aef6304365bb"} Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.747281 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.748580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" event={"ID":"f8544a86-1b67-4c2e-9b56-ca708c47b4e8","Type":"ContainerStarted","Data":"fad93262aeea3b4a3bf8ab24159939ed43c4fb1478eb2f0114a69d67832bfc7b"} Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.766862 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-p8nst" podStartSLOduration=2.76683709 podStartE2EDuration="2.76683709s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 21:32:55.759939215 +0000 UTC m=+1114.505761874" watchObservedRunningTime="2026-01-30 21:32:55.76683709 +0000 UTC m=+1114.512659739" Jan 30 21:32:55 crc kubenswrapper[4751]: I0130 21:32:55.799823 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zqbmp" Jan 30 21:32:55 crc kubenswrapper[4751]: W0130 21:32:55.824155 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9fc7f0b_0bab_4435_82d8_b78841d64687.slice/crio-422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80 WatchSource:0}: Error finding container 422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80: Status 404 returned error can't find the container with id 422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80 Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.763807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"8d9587011defe44b1f694231d492bf571a8949856ff0c8c6fa191fcbba802a4f"} Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765108 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"219a584ce3a579f7f106484b76dd6ea3cbaa96058dc67a146891459a59d84c53"} Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zqbmp" event={"ID":"e9fc7f0b-0bab-4435-82d8-b78841d64687","Type":"ContainerStarted","Data":"422233dc75e0ca29b1f929c27aaf33af34757d80dd89b02ec3b2e14e27700a80"} Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.765460 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zqbmp" Jan 30 21:32:56 crc kubenswrapper[4751]: I0130 21:32:56.793442 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zqbmp" podStartSLOduration=3.793422166 podStartE2EDuration="3.793422166s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:32:56.789341346 +0000 UTC m=+1115.535164015" watchObservedRunningTime="2026-01-30 21:32:56.793422166 +0000 UTC m=+1115.539244815" Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.823972 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="cd3828ad15ab97c99197cec86fdc99e5269d1569ca2cbd900a865fcd55d21898" exitCode=0 Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.824095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"cd3828ad15ab97c99197cec86fdc99e5269d1569ca2cbd900a865fcd55d21898"} Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.829903 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" event={"ID":"f8544a86-1b67-4c2e-9b56-ca708c47b4e8","Type":"ContainerStarted","Data":"68aacfd2fb520af61841a9dec205ceccbddedc9f6aa718869bde323dd8c55696"} Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.830171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:33:02 crc kubenswrapper[4751]: I0130 21:33:02.876442 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" podStartSLOduration=2.636285471 podStartE2EDuration="9.876424279s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="2026-01-30 21:32:55.258799308 +0000 UTC m=+1114.004621957" lastFinishedPulling="2026-01-30 21:33:02.498938106 +0000 UTC m=+1121.244760765" observedRunningTime="2026-01-30 21:33:02.872975617 +0000 UTC m=+1121.618798266" watchObservedRunningTime="2026-01-30 21:33:02.876424279 +0000 UTC m=+1121.622246928" Jan 30 21:33:03 crc kubenswrapper[4751]: I0130 21:33:03.840237 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="dabac1b2cefa268d14d21955b6c95f64e6052e5ef4033fb81eaa3dda2b12c5df" exitCode=0 Jan 30 21:33:03 crc kubenswrapper[4751]: I0130 21:33:03.840369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"dabac1b2cefa268d14d21955b6c95f64e6052e5ef4033fb81eaa3dda2b12c5df"} Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.394637 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-p8nst" Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.853016 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4" containerID="5b69ce330fd4b3a70263367badfaa303248925286b00c1946afa56f219a59fa8" exitCode=0 Jan 30 21:33:04 crc kubenswrapper[4751]: I0130 21:33:04.853099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerDied","Data":"5b69ce330fd4b3a70263367badfaa303248925286b00c1946afa56f219a59fa8"} Jan 30 21:33:05 crc kubenswrapper[4751]: I0130 21:33:05.865805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"6199f52ebf105d57d4329cc5a413b7b8298d1c59ded29398a6a091c00fbc850b"} Jan 30 21:33:05 crc kubenswrapper[4751]: I0130 21:33:05.866120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"b9e4c85dceb8dcae3a6478781dfbdba2f255351604d0652c6adf308df71bf576"} Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878137 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"bac3a2e6f9198cc329b832ad6797550ab170be9fdf82f4ef1480030abdfd61e7"} Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"ea35b1eeeee23dfe668c962048df516377a773b512369ce63c18d112a1f5e9c6"} Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"eee5744605a48ea2d4d152a246f8caa40dcba4a63ba5a1e3980e5854560b4766"} Jan 30 21:33:06 crc 
kubenswrapper[4751]: I0130 21:33:06.878743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9zjh6" event={"ID":"e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4","Type":"ContainerStarted","Data":"5d7837c995592c1ca2ab41530b875ef8f8888c556ed3b0016fffba70feb5d996"} Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.878761 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:33:06 crc kubenswrapper[4751]: I0130 21:33:06.905134 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9zjh6" podStartSLOduration=5.744765898 podStartE2EDuration="13.905115831s" podCreationTimestamp="2026-01-30 21:32:53 +0000 UTC" firstStartedPulling="2026-01-30 21:32:54.361958359 +0000 UTC m=+1113.107781008" lastFinishedPulling="2026-01-30 21:33:02.522308272 +0000 UTC m=+1121.268130941" observedRunningTime="2026-01-30 21:33:06.901753251 +0000 UTC m=+1125.647575940" watchObservedRunningTime="2026-01-30 21:33:06.905115831 +0000 UTC m=+1125.650938480" Jan 30 21:33:09 crc kubenswrapper[4751]: I0130 21:33:09.183706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:33:09 crc kubenswrapper[4751]: I0130 21:33:09.238701 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:33:14 crc kubenswrapper[4751]: I0130 21:33:14.808201 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2zl97" Jan 30 21:33:15 crc kubenswrapper[4751]: I0130 21:33:15.802626 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zqbmp" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.633717 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.635341 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.645894 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.649736 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-c2zxr" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.649966 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.650150 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.716945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.818667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.850934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"openstack-operator-index-6xdfj\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:18 crc kubenswrapper[4751]: I0130 21:33:18.991277 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:19 crc kubenswrapper[4751]: I0130 21:33:19.453898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:20 crc kubenswrapper[4751]: I0130 21:33:20.000311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerStarted","Data":"5085fba647ce3ba6d64c0eb174bd6b9b77d5b038e1666bf227ff6b20406417e9"} Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.011184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.608072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"] Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.609677 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.621623 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"] Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.707931 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.809649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.834056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqf5w\" (UniqueName: \"kubernetes.io/projected/bd6eaa60-4995-4ace-8ab0-a880f09cbee0-kube-api-access-bqf5w\") pod \"openstack-operator-index-lw6gm\" (UID: \"bd6eaa60-4995-4ace-8ab0-a880f09cbee0\") " pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:22 crc kubenswrapper[4751]: I0130 21:33:22.945211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.035427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerStarted","Data":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"} Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.035574 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6xdfj" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server" containerID="cri-o://9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" gracePeriod=2 Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.414529 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6xdfj" podStartSLOduration=2.930631172 podStartE2EDuration="5.414512634s" podCreationTimestamp="2026-01-30 21:33:18 +0000 UTC" firstStartedPulling="2026-01-30 21:33:19.459950647 +0000 UTC m=+1138.205773296" lastFinishedPulling="2026-01-30 21:33:21.943832099 +0000 UTC m=+1140.689654758" observedRunningTime="2026-01-30 21:33:23.06282562 +0000 UTC m=+1141.808648269" watchObservedRunningTime="2026-01-30 21:33:23.414512634 +0000 UTC m=+1142.160335283" Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.420344 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lw6gm"] Jan 30 21:33:23 crc kubenswrapper[4751]: W0130 21:33:23.424866 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6eaa60_4995_4ace_8ab0_a880f09cbee0.slice/crio-173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4 WatchSource:0}: 
Error finding container 173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4: Status 404 returned error can't find the container with id 173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4 Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.754300 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.827133 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") pod \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\" (UID: \"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de\") " Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.833681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv" (OuterVolumeSpecName: "kube-api-access-cmxrv") pod "c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" (UID: "c7891c80-5b04-4a6e-8b3b-8b68efa8a6de"). InnerVolumeSpecName "kube-api-access-cmxrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:33:23 crc kubenswrapper[4751]: I0130 21:33:23.929782 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxrv\" (UniqueName: \"kubernetes.io/projected/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de-kube-api-access-cmxrv\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045222 4751 generic.go:334] "Generic (PLEG): container finished" podID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" exitCode=0 Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045295 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerDied","Data":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"} Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6xdfj" event={"ID":"c7891c80-5b04-4a6e-8b3b-8b68efa8a6de","Type":"ContainerDied","Data":"5085fba647ce3ba6d64c0eb174bd6b9b77d5b038e1666bf227ff6b20406417e9"} Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045367 4751 scope.go:117] "RemoveContainer" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.045477 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6xdfj" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.046857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lw6gm" event={"ID":"bd6eaa60-4995-4ace-8ab0-a880f09cbee0","Type":"ContainerStarted","Data":"04a01555b411ff9399ce63606967837bf71c09ec36c90c1b441a41babccc5f24"} Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.046900 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lw6gm" event={"ID":"bd6eaa60-4995-4ace-8ab0-a880f09cbee0","Type":"ContainerStarted","Data":"173417603d83ece8479c25c17c4d08e017273465709efb2719a980a267da39f4"} Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.069727 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lw6gm" podStartSLOduration=1.812344234 podStartE2EDuration="2.069706368s" podCreationTimestamp="2026-01-30 21:33:22 +0000 UTC" firstStartedPulling="2026-01-30 21:33:23.428240332 +0000 UTC m=+1142.174062981" lastFinishedPulling="2026-01-30 21:33:23.685602466 +0000 UTC m=+1142.431425115" observedRunningTime="2026-01-30 21:33:24.061571 +0000 UTC m=+1142.807393649" watchObservedRunningTime="2026-01-30 21:33:24.069706368 +0000 UTC m=+1142.815529017" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.074782 4751 scope.go:117] "RemoveContainer" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" Jan 30 21:33:24 crc kubenswrapper[4751]: E0130 21:33:24.075224 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": container with ID starting with 9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181 not found: ID does not exist" containerID="9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.075267 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181"} err="failed to get container status \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": rpc error: code = NotFound desc = could not find container \"9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181\": container with ID starting with 9376319cc15e13d6262375aad9167f8895d809e414a3e8d7a7d4cf10f1c32181 not found: ID does not exist" Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.082114 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.088959 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6xdfj"] Jan 30 21:33:24 crc kubenswrapper[4751]: I0130 21:33:24.185767 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9zjh6" Jan 30 21:33:25 crc kubenswrapper[4751]: I0130 21:33:25.994188 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" path="/var/lib/kubelet/pods/c7891c80-5b04-4a6e-8b3b-8b68efa8a6de/volumes" Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.945740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.946347 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:32 crc kubenswrapper[4751]: I0130 21:33:32.994873 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:33 crc kubenswrapper[4751]: I0130 21:33:33.189463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lw6gm" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.854825 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"] Jan 30 21:33:34 crc kubenswrapper[4751]: E0130 21:33:34.855182 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.855196 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.855398 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7891c80-5b04-4a6e-8b3b-8b68efa8a6de" containerName="registry-server" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.856749 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.859750 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hcnbk" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.870508 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"] Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:34 crc kubenswrapper[4751]: I0130 21:33:34.941674 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 
21:33:35.042700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.042793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.042884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.043533 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.044809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.065302 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.179762 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:35 crc kubenswrapper[4751]: I0130 21:33:35.662196 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m"] Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173190 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="1c1c6d4eeb91da764a539ab8a25b7d05749856e70d897106bbe6532c1dabd417" exitCode=0 Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"1c1c6d4eeb91da764a539ab8a25b7d05749856e70d897106bbe6532c1dabd417"} Jan 30 21:33:36 crc kubenswrapper[4751]: I0130 21:33:36.173485 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerStarted","Data":"7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79"} Jan 30 21:33:37 crc kubenswrapper[4751]: I0130 21:33:37.187422 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="19014789ea6964b73200fa154d8d665762c73630693484d03da67952be842860" exitCode=0 Jan 30 21:33:37 crc kubenswrapper[4751]: I0130 21:33:37.187545 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"19014789ea6964b73200fa154d8d665762c73630693484d03da67952be842860"} Jan 30 21:33:38 crc kubenswrapper[4751]: I0130 21:33:38.204782 4751 generic.go:334] "Generic (PLEG): container finished" podID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerID="be1d4829ebfafdab5f1e2d575ec83c7e9a122f738f0913a2c9ff50b4666798fc" exitCode=0 Jan 30 21:33:38 crc kubenswrapper[4751]: I0130 21:33:38.205187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"be1d4829ebfafdab5f1e2d575ec83c7e9a122f738f0913a2c9ff50b4666798fc"} Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.653247 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.731796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.731955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.732041 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") pod \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\" (UID: \"8fed4afd-9214-4ec9-816d-2ba6213f2f89\") " Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.732679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle" (OuterVolumeSpecName: "bundle") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.739575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl" (OuterVolumeSpecName: "kube-api-access-vb9cl") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "kube-api-access-vb9cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.748132 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util" (OuterVolumeSpecName: "util") pod "8fed4afd-9214-4ec9-816d-2ba6213f2f89" (UID: "8fed4afd-9214-4ec9-816d-2ba6213f2f89"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834561 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9cl\" (UniqueName: \"kubernetes.io/projected/8fed4afd-9214-4ec9-816d-2ba6213f2f89-kube-api-access-vb9cl\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834592 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:39 crc kubenswrapper[4751]: I0130 21:33:39.834601 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8fed4afd-9214-4ec9-816d-2ba6213f2f89-util\") on node \"crc\" DevicePath \"\"" Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228047 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" event={"ID":"8fed4afd-9214-4ec9-816d-2ba6213f2f89","Type":"ContainerDied","Data":"7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79"} Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228108 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7880f66f06961b7d6ee3bd6e6f25aa3acac10467a2d0d4d0f52f9bad3784aa79" Jan 30 21:33:40 crc kubenswrapper[4751]: I0130 21:33:40.228122 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.778686 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779633 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="pull" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779650 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="pull" Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779694 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779702 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: E0130 21:33:46.779724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="util" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779734 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="util" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.779901 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fed4afd-9214-4ec9-816d-2ba6213f2f89" containerName="extract" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.780508 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.782933 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z5j4l" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.807857 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.827838 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.929286 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:46 crc kubenswrapper[4751]: I0130 21:33:46.952142 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmp4\" (UniqueName: \"kubernetes.io/projected/4b543295-a1a6-40ad-8b74-0ee6fdeb66c3-kube-api-access-kxmp4\") pod \"openstack-operator-controller-init-55fdcd6c79-9hzxh\" (UID: \"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3\") " pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:47 crc kubenswrapper[4751]: I0130 21:33:47.100117 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:47 crc kubenswrapper[4751]: I0130 21:33:47.532849 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh"] Jan 30 21:33:48 crc kubenswrapper[4751]: I0130 21:33:48.300051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" event={"ID":"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3","Type":"ContainerStarted","Data":"f3bc77c8b555eea9398a5ffbbdf29466862a4aa04d846d3bb25cdf36aeff59c8"} Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.353083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" event={"ID":"4b543295-a1a6-40ad-8b74-0ee6fdeb66c3","Type":"ContainerStarted","Data":"6a0c2990e2faf280012ac6c31ac61a3e5de4c2543e26bfd80b31434ec494eb62"} Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.353907 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:33:52 crc kubenswrapper[4751]: I0130 21:33:52.389895 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" podStartSLOduration=1.781139526 podStartE2EDuration="6.389868562s" podCreationTimestamp="2026-01-30 21:33:46 +0000 UTC" firstStartedPulling="2026-01-30 21:33:47.54014786 +0000 UTC m=+1166.285970509" lastFinishedPulling="2026-01-30 21:33:52.148876896 +0000 UTC m=+1170.894699545" observedRunningTime="2026-01-30 21:33:52.380429361 +0000 UTC m=+1171.126252050" watchObservedRunningTime="2026-01-30 21:33:52.389868562 +0000 UTC m=+1171.135691231" Jan 30 21:33:57 crc kubenswrapper[4751]: I0130 21:33:57.105198 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55fdcd6c79-9hzxh" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.603990 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.605465 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.612348 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j569z" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.624114 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.625452 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.627308 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6q6zs" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.635634 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.640967 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.641897 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.650797 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.660118 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lvzzp" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.665429 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.666443 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.668127 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tlnws" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.684171 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.695470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.700492 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.701530 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.703987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.704434 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-c76qn" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.715740 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.781389 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.782497 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.788730 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bq788" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.796001 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.797006 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806940 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.806996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.807063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.811722 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c58hn" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.828130 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.836666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccf7\" (UniqueName: \"kubernetes.io/projected/9003ffe6-59a3-4c7c-96d0-d129a9339247-kube-api-access-rccf7\") pod \"cinder-operator-controller-manager-8d874c8fc-6fg4r\" (UID: \"9003ffe6-59a3-4c7c-96d0-d129a9339247\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.836867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkgj\" (UniqueName: \"kubernetes.io/projected/236db419-e197-4a85-ab49-58cf38babea6-kube-api-access-clkgj\") pod 
\"barbican-operator-controller-manager-7b6c4d8c5f-7mpjw\" (UID: \"236db419-e197-4a85-ab49-58cf38babea6\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.839042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgfs\" (UniqueName: \"kubernetes.io/projected/f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9-kube-api-access-wqgfs\") pod \"designate-operator-controller-manager-6d9697b7f4-ph5lf\" (UID: \"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850414 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850476 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850486 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.850575 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.853058 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p7wfp" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.853540 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.860715 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.862064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.871975 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4f62r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.872104 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.873071 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.875735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8lzdh" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.894793 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.908643 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.909628 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.920772 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.922601 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.927374 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.927956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gxtrq" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.937854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbx4\" (UniqueName: \"kubernetes.io/projected/3fae5204-d3a1-4e39-ac3d-d28c8a55c7db-kube-api-access-4nbx4\") pod \"heat-operator-controller-manager-69d6db494d-jxkmf\" (UID: \"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.942726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wpd\" (UniqueName: \"kubernetes.io/projected/0fd5051a-5be4-4336-af86-9674469b76a0-kube-api-access-74wpd\") pod \"glance-operator-controller-manager-8886f4c47-b65fl\" (UID: \"0fd5051a-5be4-4336-af86-9674469b76a0\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.947024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.957997 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.959435 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.964868 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gbq4n" Jan 30 21:34:17 crc kubenswrapper[4751]: I0130 21:34:17.965739 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.003667 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.010406 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015684 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015712 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.015825 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.036652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.040769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zc9p\" (UniqueName: \"kubernetes.io/projected/0b3a96d4-f5fc-47be-9c28-47239b2488c1-kube-api-access-4zc9p\") pod \"horizon-operator-controller-manager-5fb775575f-hsbbr\" (UID: \"0b3a96d4-f5fc-47be-9c28-47239b2488c1\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.045271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fmm\" (UniqueName: \"kubernetes.io/projected/9a88f139-89db-4b3a-8fea-bf951e59f564-kube-api-access-t2fmm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-n2shb\" (UID: \"9a88f139-89db-4b3a-8fea-bf951e59f564\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.068601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.073918 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.075346 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.081265 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6zms8" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.081367 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.107926 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117249 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117311 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117357 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117426 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.117585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.118655 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.118726 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:18.61870954 +0000 UTC m=+1197.364532189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.119193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.122634 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.124374 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.130692 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w7wtt" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.136266 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.139954 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnrd\" (UniqueName: \"kubernetes.io/projected/b2777bff-2cca-4f41-8655-a737f13b4885-kube-api-access-gcnrd\") pod \"keystone-operator-controller-manager-84f48565d4-sw6zv\" (UID: \"b2777bff-2cca-4f41-8655-a737f13b4885\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628b2\" (UniqueName: \"kubernetes.io/projected/1ad347ea-d2ce-4a1e-912a-8471445396f7-kube-api-access-628b2\") pod \"mariadb-operator-controller-manager-67bf948998-xk52h\" (UID: \"1ad347ea-d2ce-4a1e-912a-8471445396f7\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.143717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gn9l\" (UniqueName: \"kubernetes.io/projected/694b29bc-994c-4983-81c7-b32d47db553b-kube-api-access-9gn9l\") pod \"manila-operator-controller-manager-7dd968899f-7sk5v\" (UID: \"694b29bc-994c-4983-81c7-b32d47db553b\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.147732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55lm\" (UniqueName: \"kubernetes.io/projected/2d6f1acc-6416-44ae-9082-3ebe16dce448-kube-api-access-j55lm\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.148427 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.148657 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bp5hg" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.157822 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.164650 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.164732 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.166182 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tbmz5" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.171486 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.194803 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.207413 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.209830 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.213110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8jmcf" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219205 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.219445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.233269 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.238649 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.239795 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.248748 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jjwmz" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.258550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8fxp\" (UniqueName: \"kubernetes.io/projected/4a416a7c-3094-46ef-8370-9cad7446339b-kube-api-access-k8fxp\") pod \"neutron-operator-controller-manager-585dbc889-9vvgb\" (UID: \"4a416a7c-3094-46ef-8370-9cad7446339b\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.263967 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.308436 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.309686 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.316677 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-njldh" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322551 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322612 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322815 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.322834 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.322858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.322885 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:18.822868093 +0000 UTC m=+1197.568690742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.327977 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.328815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.342946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxgxr\" (UniqueName: \"kubernetes.io/projected/c711cf07-a695-447a-8d01-147b10e9059f-kube-api-access-vxgxr\") pod \"ovn-operator-controller-manager-788c46999f-c7tj6\" (UID: \"c711cf07-a695-447a-8d01-147b10e9059f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.346774 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szp4h\" (UniqueName: \"kubernetes.io/projected/e596dcc9-7f31-4312-99e3-7d86d318ef9d-kube-api-access-szp4h\") pod \"nova-operator-controller-manager-55bff696bd-tbp7n\" (UID: \"e596dcc9-7f31-4312-99e3-7d86d318ef9d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.347928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x9t\" (UniqueName: \"kubernetes.io/projected/0026e471-8226-4038-8c52-f0add2877c8d-kube-api-access-97x9t\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.354382 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.365064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.375199 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.405947 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"] Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.407503 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.407737 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.415642 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-t5rnt"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.424096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxlg\" (UniqueName: \"kubernetes.io/projected/fcf49997-888f-4e58-99e7-f1f677dc7111-kube-api-access-6vxlg\") pod \"octavia-operator-controller-manager-6687f8d877-d6slz\" (UID: \"fcf49997-888f-4e58-99e7-f1f677dc7111\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.425986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.449084 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.450719 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.503500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/0c86abfd-77a9-4388-8b7f-b61bb378f7cb-kube-api-access-kt5hw\") pod \"swift-operator-controller-manager-68fc8c869-r6smn\" (UID: \"0c86abfd-77a9-4388-8b7f-b61bb378f7cb\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.512977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6d94\" (UniqueName: \"kubernetes.io/projected/ace28553-76bc-4472-a671-788e1fb9a1ff-kube-api-access-c6d94\") pod \"placement-operator-controller-manager-5b964cf4cd-dx8wk\" (UID: \"ace28553-76bc-4472-a671-788e1fb9a1ff\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.528559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.528668 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.536479 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.537603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.540623 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f42hl"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.565108 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.578902 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjsd\" (UniqueName: \"kubernetes.io/projected/3b9cc057-30d7-4a03-8c76-a1ca7200dbae-kube-api-access-szjsd\") pod \"telemetry-operator-controller-manager-6749767b8f-62rqr\" (UID: \"3b9cc057-30d7-4a03-8c76-a1ca7200dbae\") " pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.588591 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.608111 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"
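Each kube-api-access-* volume above is a projected service-account volume: the reconciler first logs VerifyControllerAttachedVolume, then MountVolume started, and operation_generator reports MountVolume.SetUp succeeded once the token and CA bundle are written into the pod. A partial sketch, using the k8s.io/api types, of roughly what such a projected volume looks like in a pod spec; the field values are illustrative assumptions, not read from this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // typical bound-token lifetime; illustrative
	vol := corev1.Volume{
		Name: "kube-api-access-97x9t",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// The service-account token the pod uses to reach the API.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// The cluster CA bundle, projected from a ConfigMap.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Println("projected volume:", vol.Name)
}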
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.630603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.631088 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.631138 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.631122786 +0000 UTC m=+1198.376945435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.633018 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.636424 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.650868 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.663116 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.663765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bxxmp"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.664695 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.665469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.705609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcdvn\" (UniqueName: \"kubernetes.io/projected/3d59cc79-1a37-434a-a04b-156739f469d7-kube-api-access-mcdvn\") pod \"test-operator-controller-manager-56f8bfcd9f-sc9gq\" (UID: \"3d59cc79-1a37-434a-a04b-156739f469d7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.713921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732214 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.732493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.756507 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.758741 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"
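The reflector.go "Caches populated for *v1.Secret" entries record the kubelet starting a watch on each Secret a pending pod references (image-pull dockercfg secrets, webhook and metrics certs), so mounts can proceed as soon as the object appears. A sketch of the same watch-and-cache pattern with a client-go shared informer, assuming an in-cluster config; the namespace matches this log, the resync period is illustrative:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch Secrets in one namespace, as the kubelet's reflector does
	// for secrets referenced by pods it is starting.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 30*time.Second, informers.WithNamespace("openstack-operators"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("secret appeared:", obj.(*corev1.Secret).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // the "Caches populated" moment
	select {}
}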
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762065 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762669 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pfv2x"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.762929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l88tl\" (UniqueName: \"kubernetes.io/projected/cbae5889-938b-4211-94a6-de960df2f95d-kube-api-access-l88tl\") pod \"watcher-operator-controller-manager-564965969-gcvgx\" (UID: \"cbae5889-938b-4211-94a6-de960df2f95d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.843641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844082 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.844222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844234 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844292 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.84427523 +0000 UTC m=+1198.590097879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844311 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844397 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.344373352 +0000 UTC m=+1198.090195991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844410 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: E0130 21:34:18.844467 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:19.344451754 +0000 UTC m=+1198.090274403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.869673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lldbr\" (UniqueName: \"kubernetes.io/projected/dac6f1f3-8549-488c-bb63-aa980f4a1282-kube-api-access-lldbr\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.882912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw"]
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.911964 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"
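All of the failing mounts here ("metrics-server-cert", "webhook-server-cert", "infra-operator-webhook-server-cert", "openstack-baremetal-operator-webhook-server-cert") are certificate Secrets that are typically issued asynchronously, for example by cert-manager, after the Deployments are created; the kubelet simply retries the mounts until the Secrets exist. A small client-go sketch of the same wait, polling for a named Secret; the namespace and secret name are taken from this log, the delays are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForSecret polls until the Secret exists, mirroring the kubelet's
// retry loop for secret-backed volumes (delays here are illustrative).
func waitForSecret(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		_, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			return nil
		}
		fmt.Printf("secret %q not found, retrying: %v\n", name, err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForSecret(ctx, cs, "openstack-operators", "webhook-server-cert"); err != nil {
		panic(err)
	}
	fmt.Println("secret present; mounts can proceed")
}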
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.945665 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.968036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p4vl\" (UniqueName: \"kubernetes.io/projected/a986231c-2119-4a13-801d-51119db5d365-kube-api-access-5p4vl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v8vch\" (UID: \"a986231c-2119-4a13-801d-51119db5d365\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"
Jan 30 21:34:18 crc kubenswrapper[4751]: I0130 21:34:18.969307 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.103256 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.353237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353427 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.353598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353677 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:20.353652045 +0000 UTC m=+1199.099474724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353843 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.353895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:20.353879401 +0000 UTC m=+1199.099702050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.606965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" event={"ID":"236db419-e197-4a85-ab49-58cf38babea6","Type":"ContainerStarted","Data":"2ba4406bb67582ee9f558fbfbada8121f5b51f13670b02de497c442bd9ca8830"}
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.658582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.658746 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.658817 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:21.658798946 +0000 UTC m=+1200.404621605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.828384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf"]
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.841848 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl"]
Jan 30 21:34:19 crc kubenswrapper[4751]: W0130 21:34:19.848668 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9003ffe6_59a3_4c7c_96d0_d129a9339247.slice/crio-9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc WatchSource:0}: Error finding container 9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc: Status 404 returned error can't find the container with id 9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc
Jan 30 21:34:19 crc kubenswrapper[4751]: W0130 21:34:19.859911 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd5051a_5be4_4336_af86_9674469b76a0.slice/crio-689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573 WatchSource:0}: Error finding container 689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573: Status 404 returned error can't find the container with id 689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.864505 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf"]
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.877843 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r"]
Jan 30 21:34:19 crc kubenswrapper[4751]: I0130 21:34:19.878635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.878799 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:19 crc kubenswrapper[4751]: E0130 21:34:19.878862 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:21.878843794 +0000 UTC m=+1200.624666443 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.111463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv"]
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.120594 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n"]
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.144560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr"]
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.152657 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v"]
Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.163793 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ad347ea_d2ce_4a1e_912a_8471445396f7.slice/crio-8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654 WatchSource:0}: Error finding container 8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654: Status 404 returned error can't find the container with id 8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.175520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h"]
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.181377 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb"]
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.388271 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.388315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388467 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388512 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:22.388498847 +0000 UTC m=+1201.134321496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388796 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.388818 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:22.388811575 +0000 UTC m=+1201.134634224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.621530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" event={"ID":"0b3a96d4-f5fc-47be-9c28-47239b2488c1","Type":"ContainerStarted","Data":"91b23c6544d6bed27d5b81117dea91eb3f50aa21e109430b6526723075cb41a3"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.622630 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" event={"ID":"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9","Type":"ContainerStarted","Data":"5caed7165150755b34445e56c40ef0252960893bc2b1e3802b4236f5e7a84ac9"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.623925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" event={"ID":"9003ffe6-59a3-4c7c-96d0-d129a9339247","Type":"ContainerStarted","Data":"9329585d872defd623d97d0739b0dd49e03a2c1f7d6f7bce192b2b4edc0d9bbc"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.624819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" event={"ID":"9a88f139-89db-4b3a-8fea-bf951e59f564","Type":"ContainerStarted","Data":"7e327cfa635d28359b8acf63438541f92264d3c506a6bf8242ee4c751c8376ed"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.625877 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" event={"ID":"694b29bc-994c-4983-81c7-b32d47db553b","Type":"ContainerStarted","Data":"5dfb8991a4f5c5de4364e1cd9bccebd6806d738f0d0cc85cecc23600ec73f524"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.627352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" event={"ID":"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db","Type":"ContainerStarted","Data":"c8a1058cffeb24b2b928fa9fd7519b5d8864f41396346c06970ae212d20c48cc"}
Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.628673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" event={"ID":"e596dcc9-7f31-4312-99e3-7d86d318ef9d","Type":"ContainerStarted","Data":"9a1ec0ead61cc5cd826574c047b6963a5dee629a4955684d0b231df84b4ca606"}
event={"ID":"e596dcc9-7f31-4312-99e3-7d86d318ef9d","Type":"ContainerStarted","Data":"9a1ec0ead61cc5cd826574c047b6963a5dee629a4955684d0b231df84b4ca606"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.629723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" event={"ID":"1ad347ea-d2ce-4a1e-912a-8471445396f7","Type":"ContainerStarted","Data":"8c40aef27db5390a879bcf0125d770c1b581301fd2cfaa6fa313216470157654"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.630849 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" event={"ID":"0fd5051a-5be4-4336-af86-9674469b76a0","Type":"ContainerStarted","Data":"689fe8529e2dd72a517cf93755e7a0a7b52870527ec64ba468c8c4f7783b4573"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.631914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" event={"ID":"b2777bff-2cca-4f41-8655-a737f13b4885","Type":"ContainerStarted","Data":"a240e046249b286069b7324a2b9ef995899c6ea84f6ff40bca43281f3add3062"} Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.732976 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz"] Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.758934 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf49997_888f_4e58_99e7_f1f677dc7111.slice/crio-9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c WatchSource:0}: Error finding container 9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c: Status 404 returned error can't find the container with id 9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.760526 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.777797 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.790781 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-gcvgx"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.796968 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.808920 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.819620 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.826277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb"] Jan 30 21:34:20 crc kubenswrapper[4751]: I0130 21:34:20.833843 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6"] Jan 30 21:34:20 crc kubenswrapper[4751]: 
Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.842385 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda986231c_2119_4a13_801d_51119db5d365.slice/crio-193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908 WatchSource:0}: Error finding container 193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908: Status 404 returned error can't find the container with id 193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908
Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.843608 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9cc057_30d7_4a03_8c76_a1ca7200dbae.slice/crio-989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12 WatchSource:0}: Error finding container 989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12: Status 404 returned error can't find the container with id 989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.853351 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5p4vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v8vch_openstack-operators(a986231c-2119-4a13-801d-51119db5d365): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.853433 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbae5889_938b_4211_94a6_de960df2f95d.slice/crio-058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1 WatchSource:0}: Error finding container 058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1: Status 404 returned error can't find the container with id 058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.854471 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.855892 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l88tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-gcvgx_openstack-operators(cbae5889-938b-4211-94a6-de960df2f95d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.857387 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.863249 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6d94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dx8wk_openstack-operators(ace28553-76bc-4472-a671-788e1fb9a1ff): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.864383 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff"
Jan 30 21:34:20 crc kubenswrapper[4751]: W0130 21:34:20.876302 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a416a7c_3094_46ef_8370_9cad7446339b.slice/crio-0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017 WatchSource:0}: Error finding container 0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017: Status 404 returned error can't find the container with id 0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.887297 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8fxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-9vvgb_openstack-operators(4a416a7c-3094-46ef-8370-9cad7446339b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 21:34:20 crc kubenswrapper[4751]: E0130 21:34:20.888930 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b"
Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.658362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" event={"ID":"3d59cc79-1a37-434a-a04b-156739f469d7","Type":"ContainerStarted","Data":"4aea079f16bcf7802778f38af2a9756734fe7496a2f9ccf4969bdd2adab93087"}
Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.660409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" event={"ID":"c711cf07-a695-447a-8d01-147b10e9059f","Type":"ContainerStarted","Data":"89940a894ad22cc1d199b4183fb29cc1f4c3fd68af2eeca7f427cd9a5c8691c3"}
Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.661841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" event={"ID":"0c86abfd-77a9-4388-8b7f-b61bb378f7cb","Type":"ContainerStarted","Data":"ddfb2961f9e21ec6546a36edfdfd8a87e77634181bf1a823108169a8d3b1a2bc"}
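"pull QPS exceeded" is the kubelet's image-pull rate limiter rejecting pulls when roughly twenty operator images are requested at once; the limit is a token bucket configured by registryPullQPS and registryBurst in the kubelet configuration (5 pulls/second with a burst of 10 by default, values cited from the kubelet defaults rather than from this log). The shape of the check, sketched here with golang.org/x/time/rate rather than the kubelet's own internals:

package main

import (
	"errors"
	"fmt"

	"golang.org/x/time/rate"
)

// pullLimiter mimics the kubelet's registry token bucket:
// refill at 5 pulls/second, burst of 10 (default-style values; assumed).
var pullLimiter = rate.NewLimiter(rate.Limit(5), 10)

func startPull(image string) error {
	if !pullLimiter.Allow() {
		// The kubelet surfaces this as ErrImagePull: "pull QPS exceeded",
		// and the pod worker records it, as in the entries above.
		return errors.New("pull QPS exceeded")
	}
	fmt.Println("pulling", image)
	return nil
}

func main() {
	for i := 0; i < 15; i++ {
		if err := startPull(fmt.Sprintf("quay.io/example/operator-%d", i)); err != nil {
			fmt.Printf("image %d: %v\n", i, err)
		}
	}
}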
for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" event={"ID":"0c86abfd-77a9-4388-8b7f-b61bb378f7cb","Type":"ContainerStarted","Data":"ddfb2961f9e21ec6546a36edfdfd8a87e77634181bf1a823108169a8d3b1a2bc"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.664644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" event={"ID":"a986231c-2119-4a13-801d-51119db5d365","Type":"ContainerStarted","Data":"193e672f303b83e1d934fcbe9df43481356ac6434595e5751867078a1e21c908"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.665972 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.666763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" event={"ID":"cbae5889-938b-4211-94a6-de960df2f95d","Type":"ContainerStarted","Data":"058882ebb92652797bf4059a5aaae63daa2c9f92ef617ccdca1030ce3c961fe1"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.670778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" event={"ID":"fcf49997-888f-4e58-99e7-f1f677dc7111","Type":"ContainerStarted","Data":"9b82bd9a8cb32cce1497f99de7e84e36df6e793134aabedb5e1080d55731e87c"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.671854 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.671958 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" event={"ID":"4a416a7c-3094-46ef-8370-9cad7446339b","Type":"ContainerStarted","Data":"0094b7b3dfbc9734b6276513985162aa04d0ae49e6122a38568badfe07d38017"} Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.673395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" event={"ID":"3b9cc057-30d7-4a03-8c76-a1ca7200dbae","Type":"ContainerStarted","Data":"989f1c0433c76638fa747641936514fc38831afa8e383f773ba81a8ba115bd12"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.674496 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.680111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" event={"ID":"ace28553-76bc-4472-a671-788e1fb9a1ff","Type":"ContainerStarted","Data":"d8186f89b686033afc03bed94516f283c26dfaa0d4533f55008bedabb5350c3a"} Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.682627 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff" Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.726073 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.727702 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.727784 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:25.727765888 +0000 UTC m=+1204.473588537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: I0130 21:34:21.937267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.937715 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:21 crc kubenswrapper[4751]: E0130 21:34:21.937948 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:25.937931192 +0000 UTC m=+1204.683753841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.461610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.461925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.461778 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462039 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:26.46202191 +0000 UTC m=+1205.207844559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462119 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.462172 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:26.462157164 +0000 UTC m=+1205.207979813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.694792 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podUID="ace28553-76bc-4472-a671-788e1fb9a1ff" Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695085 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podUID="cbae5889-938b-4211-94a6-de960df2f95d" Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695121 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:34:22 crc kubenswrapper[4751]: E0130 21:34:22.695168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:34:22 crc kubenswrapper[4751]: I0130 21:34:22.799198 4751 scope.go:117] "RemoveContainer" containerID="a1a33ec969e1c6d383f8048ab15fcd257712831d15c58fa7001702dde20fdac5" Jan 30 21:34:23 crc kubenswrapper[4751]: I0130 21:34:23.045335 4751 scope.go:117] "RemoveContainer" containerID="cde0bba9b5bd705e79427c82f01801f8fb8f078a030c2d0c0c73c34abe57027a" Jan 30 21:34:25 crc kubenswrapper[4751]: I0130 21:34:25.827080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:25 crc kubenswrapper[4751]: E0130 21:34:25.827269 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:25 crc kubenswrapper[4751]: E0130 21:34:25.827597 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. 
No retries permitted until 2026-01-30 21:34:33.827575884 +0000 UTC m=+1212.573398533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.030572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.030845 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.030903 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.030887005 +0000 UTC m=+1212.776709654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.538434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:26 crc kubenswrapper[4751]: I0130 21:34:26.538484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538620 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538635 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538697 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.538658717 +0000 UTC m=+1213.284481366 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:26 crc kubenswrapper[4751]: E0130 21:34:26.538714 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:34.538707188 +0000 UTC m=+1213.284529837 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.094680 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.095521 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqgfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-ph5lf_openstack-operators(f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.096783 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podUID="f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.779655 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podUID="f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9" Jan 30 21:34:33 crc kubenswrapper[4751]: I0130 21:34:33.885102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.885366 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:33 crc kubenswrapper[4751]: E0130 21:34:33.885476 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert podName:2d6f1acc-6416-44ae-9082-3ebe16dce448 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:49.885448409 +0000 UTC m=+1228.631271158 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert") pod "infra-operator-controller-manager-79955696d6-52vr2" (UID: "2d6f1acc-6416-44ae-9082-3ebe16dce448") : secret "infra-operator-webhook-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.088937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.090343 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.090549 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert podName:0026e471-8226-4038-8c52-f0add2877c8d nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.090536028 +0000 UTC m=+1228.836358677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" (UID: "0026e471-8226-4038-8c52-f0add2877c8d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.598042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:34 crc kubenswrapper[4751]: I0130 21:34:34.598088 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598308 4751 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598416 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.598396262 +0000 UTC m=+1229.344218991 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "metrics-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.598481 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs podName:dac6f1f3-8549-488c-bb63-aa980f4a1282 nodeName:}" failed. No retries permitted until 2026-01-30 21:34:50.598466264 +0000 UTC m=+1229.344288913 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs") pod "openstack-operator-controller-manager-7d48698d88-jbmh6" (UID: "dac6f1f3-8549-488c-bb63-aa980f4a1282") : secret "webhook-server-cert" not found Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.657347 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.657558 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rccf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-6fg4r_openstack-operators(9003ffe6-59a3-4c7c-96d0-d129a9339247): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.659679 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podUID="9003ffe6-59a3-4c7c-96d0-d129a9339247" Jan 30 21:34:34 crc kubenswrapper[4751]: E0130 21:34:34.783500 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podUID="9003ffe6-59a3-4c7c-96d0-d129a9339247" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.674705 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.675151 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gn9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-7sk5v_openstack-operators(694b29bc-994c-4983-81c7-b32d47db553b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.676386 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podUID="694b29bc-994c-4983-81c7-b32d47db553b" Jan 30 21:34:36 crc kubenswrapper[4751]: E0130 21:34:36.800269 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podUID="694b29bc-994c-4983-81c7-b32d47db553b" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.146609 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.146780 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nbx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-jxkmf_openstack-operators(3fae5204-d3a1-4e39-ac3d-d28c8a55c7db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.148796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podUID="3fae5204-d3a1-4e39-ac3d-d28c8a55c7db" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.685624 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.685894 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zc9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-hsbbr_openstack-operators(0b3a96d4-f5fc-47be-9c28-47239b2488c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.687108 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podUID="0b3a96d4-f5fc-47be-9c28-47239b2488c1" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.809198 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podUID="0b3a96d4-f5fc-47be-9c28-47239b2488c1" Jan 30 21:34:37 crc kubenswrapper[4751]: E0130 21:34:37.809635 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podUID="3fae5204-d3a1-4e39-ac3d-d28c8a55c7db" Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.964961 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.967231 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vxlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-d6slz_openstack-operators(fcf49997-888f-4e58-99e7-f1f677dc7111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:41 crc kubenswrapper[4751]: E0130 21:34:41.968873 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" podUID="fcf49997-888f-4e58-99e7-f1f677dc7111" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.477457 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.478285 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcdvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-sc9gq_openstack-operators(3d59cc79-1a37-434a-a04b-156739f469d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.480744 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podUID="3d59cc79-1a37-434a-a04b-156739f469d7" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.853167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podUID="3d59cc79-1a37-434a-a04b-156739f469d7" Jan 30 21:34:42 crc kubenswrapper[4751]: E0130 21:34:42.853330 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" 
podUID="fcf49997-888f-4e58-99e7-f1f677dc7111" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.215760 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.216006 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxgxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-c7tj6_openstack-operators(c711cf07-a695-447a-8d01-147b10e9059f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.218423 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podUID="c711cf07-a695-447a-8d01-147b10e9059f" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.810983 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.811426 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt5hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-r6smn_openstack-operators(0c86abfd-77a9-4388-8b7f-b61bb378f7cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.813118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" podUID="0c86abfd-77a9-4388-8b7f-b61bb378f7cb" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.886875 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" 
podUID="0c86abfd-77a9-4388-8b7f-b61bb378f7cb" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.887153 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podUID="c711cf07-a695-447a-8d01-147b10e9059f" Jan 30 21:34:44 crc kubenswrapper[4751]: I0130 21:34:44.887310 4751 scope.go:117] "RemoveContainer" containerID="4bac6aed72495d5a47025b1229e37fd0256684ee83fbbdb6b3d50f1e0a5fc0c5" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893396 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893579 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.893770 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6749767b8f-62rqr_openstack-operators(3b9cc057-30d7-4a03-8c76-a1ca7200dbae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:44 crc kubenswrapper[4751]: E0130 21:34:44.895170 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podUID="3b9cc057-30d7-4a03-8c76-a1ca7200dbae" Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.566422 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.566811 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szp4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-tbp7n_openstack-operators(e596dcc9-7f31-4312-99e3-7d86d318ef9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.568636 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podUID="e596dcc9-7f31-4312-99e3-7d86d318ef9d" Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.893541 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podUID="e596dcc9-7f31-4312-99e3-7d86d318ef9d" Jan 30 21:34:45 crc kubenswrapper[4751]: E0130 21:34:45.893608 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/openstack-k8s-operators/telemetry-operator:a5bcf05e2d71c610156d017fdf197f7c58570d79\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podUID="3b9cc057-30d7-4a03-8c76-a1ca7200dbae" Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.211405 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.211676 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-sw6zv_openstack-operators(b2777bff-2cca-4f41-8655-a737f13b4885): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.212968 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podUID="b2777bff-2cca-4f41-8655-a737f13b4885" Jan 30 21:34:46 crc kubenswrapper[4751]: E0130 21:34:46.902237 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podUID="b2777bff-2cca-4f41-8655-a737f13b4885" Jan 30 21:34:49 crc kubenswrapper[4751]: I0130 21:34:49.942329 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:49 crc kubenswrapper[4751]: I0130 21:34:49.955818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d6f1acc-6416-44ae-9082-3ebe16dce448-cert\") pod \"infra-operator-controller-manager-79955696d6-52vr2\" (UID: \"2d6f1acc-6416-44ae-9082-3ebe16dce448\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.027418 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p7wfp" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.036072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.147728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.181254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0026e471-8226-4038-8c52-f0add2877c8d-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk\" (UID: \"0026e471-8226-4038-8c52-f0add2877c8d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.266908 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bp5hg" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.276033 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.656269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.656308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.662143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-webhook-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.664642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dac6f1f3-8549-488c-bb63-aa980f4a1282-metrics-certs\") pod \"openstack-operator-controller-manager-7d48698d88-jbmh6\" (UID: \"dac6f1f3-8549-488c-bb63-aa980f4a1282\") " pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.866262 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bxxmp" Jan 30 21:34:50 crc kubenswrapper[4751]: I0130 21:34:50.874481 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.005832 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.006280 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8fxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-9vvgb_openstack-operators(4a416a7c-3094-46ef-8370-9cad7446339b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.007519 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.426818 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.427234 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5p4vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v8vch_openstack-operators(a986231c-2119-4a13-801d-51119db5d365): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:34:53 crc kubenswrapper[4751]: E0130 21:34:53.429498 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.448888 4751 scope.go:117] "RemoveContainer" containerID="abe94b742d94eb174247c27e3a3c038f4045e5dcdb784f1f247f494e3ae1f48a" Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.631219 4751 scope.go:117] "RemoveContainer" containerID="347f9ed747e483e16fb6ae1c645ea8f9e1e241d75612df7496d92124e040f3b2" Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.713632 4751 scope.go:117] "RemoveContainer" 
containerID="00077efd881cb27326f6e85b8f3f194fe2c51b7a53178340a6cd81dc7d4c6583" Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.973280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" event={"ID":"236db419-e197-4a85-ab49-58cf38babea6","Type":"ContainerStarted","Data":"5fa5c830bfdd612600389bfea6674a544ce967add5e924a725be5ebb2885679a"} Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.973775 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:34:53 crc kubenswrapper[4751]: I0130 21:34:53.987480 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" podStartSLOduration=15.784407541 podStartE2EDuration="36.987466414s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.009041761 +0000 UTC m=+1197.754864410" lastFinishedPulling="2026-01-30 21:34:40.212100614 +0000 UTC m=+1218.957923283" observedRunningTime="2026-01-30 21:34:53.986896389 +0000 UTC m=+1232.732719038" watchObservedRunningTime="2026-01-30 21:34:53.987466414 +0000 UTC m=+1232.733289063" Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012334 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" event={"ID":"1ad347ea-d2ce-4a1e-912a-8471445396f7","Type":"ContainerStarted","Data":"2ed1c48780361b0da3291a463bb0b12eaf8d4b6a0628fa06712b48e9297a09e3"} Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" event={"ID":"ace28553-76bc-4472-a671-788e1fb9a1ff","Type":"ContainerStarted","Data":"1b719fe4358ed9389812b21624b67e92bb6f60818e78767305a4e5d49e7097c3"} Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.012583 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.019556 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" podStartSLOduration=12.388039844 podStartE2EDuration="37.019533701s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.167129913 +0000 UTC m=+1198.912952562" lastFinishedPulling="2026-01-30 21:34:44.79862377 +0000 UTC m=+1223.544446419" observedRunningTime="2026-01-30 21:34:54.008981969 +0000 UTC m=+1232.754804618" watchObservedRunningTime="2026-01-30 21:34:54.019533701 +0000 UTC m=+1232.765356350" Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.059744 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" podStartSLOduration=4.401760019 podStartE2EDuration="37.059726244s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.863163234 +0000 UTC m=+1199.608985883" lastFinishedPulling="2026-01-30 21:34:53.521129429 +0000 UTC m=+1232.266952108" observedRunningTime="2026-01-30 21:34:54.056900539 +0000 UTC m=+1232.802723188" watchObservedRunningTime="2026-01-30 21:34:54.059726244 +0000 UTC m=+1232.805548893" Jan 30 
21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.126575 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.126614 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.160073 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-52vr2"] Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.203795 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk"] Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.379260 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6"] Jan 30 21:34:54 crc kubenswrapper[4751]: W0130 21:34:54.394906 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac6f1f3_8549_488c_bb63_aa980f4a1282.slice/crio-ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3 WatchSource:0}: Error finding container ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3: Status 404 returned error can't find the container with id ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3 Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.992705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" event={"ID":"cbae5889-938b-4211-94a6-de960df2f95d","Type":"ContainerStarted","Data":"4b6ca8b0c876caf2fcdd17416cd135560c19ab71fb393357753836ea0497d737"} Jan 30 21:34:54 crc kubenswrapper[4751]: I0130 21:34:54.993242 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.011226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" event={"ID":"0026e471-8226-4038-8c52-f0add2877c8d","Type":"ContainerStarted","Data":"22bed683c9777fa2ec651abaa8b0249cd6ed9ce02d93cd2508d4bc79adc5c7b1"} Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.012198 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" event={"ID":"dac6f1f3-8549-488c-bb63-aa980f4a1282","Type":"ContainerStarted","Data":"ff2e1c06d22df6dfdb1a2b6299334dafee22973264e0e7f72550a4ae0320e0d3"} Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.012997 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" event={"ID":"2d6f1acc-6416-44ae-9082-3ebe16dce448","Type":"ContainerStarted","Data":"51a27cb253d3d55b9a1e7d0fae5bb0e213d638f3c401b29e1517770777f5c785"} Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 
21:34:55.014152 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" event={"ID":"0fd5051a-5be4-4336-af86-9674469b76a0","Type":"ContainerStarted","Data":"4a4d622ebabe0baf4bd923e5a4affe3b44810af0cb669bd6e91087a654cca655"} Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.014315 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.018820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" event={"ID":"9a88f139-89db-4b3a-8fea-bf951e59f564","Type":"ContainerStarted","Data":"673f6fbf17424d40a3292cb90a3b3659b28fc30d1e56626b39acb557d110443e"} Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.018858 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.019202 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.106951 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" podStartSLOduration=4.441512902 podStartE2EDuration="37.106935966s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.855788087 +0000 UTC m=+1199.601610736" lastFinishedPulling="2026-01-30 21:34:53.521211151 +0000 UTC m=+1232.267033800" observedRunningTime="2026-01-30 21:34:55.077497219 +0000 UTC m=+1233.823319868" watchObservedRunningTime="2026-01-30 21:34:55.106935966 +0000 UTC m=+1233.852758605" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.138740 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" podStartSLOduration=11.810153458 podStartE2EDuration="38.138724894s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.864680145 +0000 UTC m=+1198.610502794" lastFinishedPulling="2026-01-30 21:34:46.193251531 +0000 UTC m=+1224.939074230" observedRunningTime="2026-01-30 21:34:55.137712038 +0000 UTC m=+1233.883534687" watchObservedRunningTime="2026-01-30 21:34:55.138724894 +0000 UTC m=+1233.884547543" Jan 30 21:34:55 crc kubenswrapper[4751]: I0130 21:34:55.151063 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" podStartSLOduration=12.087689122 podStartE2EDuration="38.151042074s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.129897099 +0000 UTC m=+1198.875719738" lastFinishedPulling="2026-01-30 21:34:46.193250041 +0000 UTC m=+1224.939072690" observedRunningTime="2026-01-30 21:34:55.11457265 +0000 UTC m=+1233.860395299" watchObservedRunningTime="2026-01-30 21:34:55.151042074 +0000 UTC m=+1233.896864723" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.035663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" 
event={"ID":"fcf49997-888f-4e58-99e7-f1f677dc7111","Type":"ContainerStarted","Data":"0065c8a882442cc833b09c16c3f19959d7b9b5ecfd1c352525ce1dac50a8453f"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.036253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.045557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" event={"ID":"0b3a96d4-f5fc-47be-9c28-47239b2488c1","Type":"ContainerStarted","Data":"d119097e01f5202f4300c2ff25ad3a798c72ae5270d64d63ff575555b33cae32"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.046383 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.051797 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" podStartSLOduration=4.118363621 podStartE2EDuration="39.051781472s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.771624129 +0000 UTC m=+1199.517446778" lastFinishedPulling="2026-01-30 21:34:55.70504198 +0000 UTC m=+1234.450864629" observedRunningTime="2026-01-30 21:34:56.049551112 +0000 UTC m=+1234.795373761" watchObservedRunningTime="2026-01-30 21:34:56.051781472 +0000 UTC m=+1234.797604121" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.056702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" event={"ID":"f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9","Type":"ContainerStarted","Data":"fbde8d72f1fc39b1f74c2d0bc1c2d5bf96862e1f7b1d06e8d530d9e4198b7b9d"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.057048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.064437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" event={"ID":"9003ffe6-59a3-4c7c-96d0-d129a9339247","Type":"ContainerStarted","Data":"13a8109b51a4c5e715db5d09987428acf7c9346c4a030fa77527c6ac71340c9b"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.064643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.069776 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" podStartSLOduration=5.484966544 podStartE2EDuration="39.069764603s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.167907584 +0000 UTC m=+1198.913730233" lastFinishedPulling="2026-01-30 21:34:53.752705643 +0000 UTC m=+1232.498528292" observedRunningTime="2026-01-30 21:34:56.064942724 +0000 UTC m=+1234.810765393" watchObservedRunningTime="2026-01-30 21:34:56.069764603 +0000 UTC m=+1234.815587252" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.071785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" event={"ID":"dac6f1f3-8549-488c-bb63-aa980f4a1282","Type":"ContainerStarted","Data":"62c7020bf65aa30bbe0811245bd0fca8a2ba912af669ab81357dc420c6fb354c"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.071957 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.073393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" event={"ID":"3d59cc79-1a37-434a-a04b-156739f469d7","Type":"ContainerStarted","Data":"562b62c113879705894b59c893522e12cf1dcd15b6b9b096b87d13907e1a9f19"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.073552 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.074623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" event={"ID":"694b29bc-994c-4983-81c7-b32d47db553b","Type":"ContainerStarted","Data":"98639b3d14f774696a05d9d16f0133b57465a837d6405d368795058b7e1a3ca7"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.074748 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.078836 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" event={"ID":"3fae5204-d3a1-4e39-ac3d-d28c8a55c7db","Type":"ContainerStarted","Data":"eeecfd4751c457ab23201cb5d49e83c31bfd47168c8de8e1e03dbb30ad11097e"} Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.079265 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.086297 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" podStartSLOduration=5.148898675 podStartE2EDuration="39.086277603s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.833484101 +0000 UTC m=+1198.579306750" lastFinishedPulling="2026-01-30 21:34:53.770863029 +0000 UTC m=+1232.516685678" observedRunningTime="2026-01-30 21:34:56.075380193 +0000 UTC m=+1234.821202842" watchObservedRunningTime="2026-01-30 21:34:56.086277603 +0000 UTC m=+1234.832100252" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.119092 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" podStartSLOduration=5.22692729 podStartE2EDuration="39.119073959s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.860572555 +0000 UTC m=+1198.606395204" lastFinishedPulling="2026-01-30 21:34:53.752719224 +0000 UTC m=+1232.498541873" observedRunningTime="2026-01-30 21:34:56.102142357 +0000 UTC m=+1234.847965006" watchObservedRunningTime="2026-01-30 21:34:56.119073959 +0000 UTC m=+1234.864896608" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.129412 4751 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" podStartSLOduration=5.23742279 podStartE2EDuration="39.129393395s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:19.860814101 +0000 UTC m=+1198.606636750" lastFinishedPulling="2026-01-30 21:34:53.752784706 +0000 UTC m=+1232.498607355" observedRunningTime="2026-01-30 21:34:56.129381515 +0000 UTC m=+1234.875204164" watchObservedRunningTime="2026-01-30 21:34:56.129393395 +0000 UTC m=+1234.875216074" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.154426 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" podStartSLOduration=5.58441943 podStartE2EDuration="39.154408484s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.161239846 +0000 UTC m=+1198.907062495" lastFinishedPulling="2026-01-30 21:34:53.7312289 +0000 UTC m=+1232.477051549" observedRunningTime="2026-01-30 21:34:56.149545843 +0000 UTC m=+1234.895368492" watchObservedRunningTime="2026-01-30 21:34:56.154408484 +0000 UTC m=+1234.900231133" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.178577 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" podStartSLOduration=3.600743625 podStartE2EDuration="38.178559718s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.798740204 +0000 UTC m=+1199.544562853" lastFinishedPulling="2026-01-30 21:34:55.376556307 +0000 UTC m=+1234.122378946" observedRunningTime="2026-01-30 21:34:56.171389987 +0000 UTC m=+1234.917212646" watchObservedRunningTime="2026-01-30 21:34:56.178559718 +0000 UTC m=+1234.924382357" Jan 30 21:34:56 crc kubenswrapper[4751]: I0130 21:34:56.230808 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" podStartSLOduration=38.230793354 podStartE2EDuration="38.230793354s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:34:56.227787273 +0000 UTC m=+1234.973609922" watchObservedRunningTime="2026-01-30 21:34:56.230793354 +0000 UTC m=+1234.976616003" Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.113928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" event={"ID":"0c86abfd-77a9-4388-8b7f-b61bb378f7cb","Type":"ContainerStarted","Data":"25c64994eab3654900b8462a41b5e82659584d507d33a0169bf1cf58a8c1563b"} Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.114352 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.127951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" event={"ID":"c711cf07-a695-447a-8d01-147b10e9059f","Type":"ContainerStarted","Data":"65ce6678db5c348ded66e26df03e88cf5cc9ea44e66f2bea0f2dd7ed64914148"} Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.129740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.137477 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" podStartSLOduration=4.541451722 podStartE2EDuration="40.137455261s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.799273658 +0000 UTC m=+1199.545096307" lastFinishedPulling="2026-01-30 21:34:56.395277187 +0000 UTC m=+1235.141099846" observedRunningTime="2026-01-30 21:34:57.130643968 +0000 UTC m=+1235.876466617" watchObservedRunningTime="2026-01-30 21:34:57.137455261 +0000 UTC m=+1235.883277910" Jan 30 21:34:57 crc kubenswrapper[4751]: I0130 21:34:57.154841 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" podStartSLOduration=4.4945907609999995 podStartE2EDuration="40.154822075s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.842943455 +0000 UTC m=+1199.588766104" lastFinishedPulling="2026-01-30 21:34:56.503174769 +0000 UTC m=+1235.248997418" observedRunningTime="2026-01-30 21:34:57.148701771 +0000 UTC m=+1235.894524440" watchObservedRunningTime="2026-01-30 21:34:57.154822075 +0000 UTC m=+1235.900644724" Jan 30 21:34:58 crc kubenswrapper[4751]: I0130 21:34:58.370572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-xk52h" Jan 30 21:34:58 crc kubenswrapper[4751]: I0130 21:34:58.613380 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dx8wk" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.153136 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" event={"ID":"b2777bff-2cca-4f41-8655-a737f13b4885","Type":"ContainerStarted","Data":"a5b059559b5215a322f41f19ccd90fde5e3bbbaf7eb96502655a3f4d2bccc703"} Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.154302 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.156086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" event={"ID":"0026e471-8226-4038-8c52-f0add2877c8d","Type":"ContainerStarted","Data":"36f6150b9b0b284192d5acf270cbd8ea3642994d684f14b41e618315a802d1c9"} Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.156532 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.158547 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" event={"ID":"e596dcc9-7f31-4312-99e3-7d86d318ef9d","Type":"ContainerStarted","Data":"6a1379304ff76647533441df6e1b27eb60488c7403124b9c15cc9b521c00747e"} Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.159076 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:35:00 crc 
kubenswrapper[4751]: I0130 21:35:00.161020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" event={"ID":"2d6f1acc-6416-44ae-9082-3ebe16dce448","Type":"ContainerStarted","Data":"0079251a734d5d58daf7ed7163417339edbacaef2022d9e5c0ed0507c4570a06"} Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.161443 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.169051 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" podStartSLOduration=3.8233985820000003 podStartE2EDuration="43.169033564s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.164947355 +0000 UTC m=+1198.910770004" lastFinishedPulling="2026-01-30 21:34:59.510582347 +0000 UTC m=+1238.256404986" observedRunningTime="2026-01-30 21:35:00.167868472 +0000 UTC m=+1238.913691131" watchObservedRunningTime="2026-01-30 21:35:00.169033564 +0000 UTC m=+1238.914856213" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.232936 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" podStartSLOduration=38.060495115 podStartE2EDuration="43.232914521s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:54.186977783 +0000 UTC m=+1232.932800432" lastFinishedPulling="2026-01-30 21:34:59.359397189 +0000 UTC m=+1238.105219838" observedRunningTime="2026-01-30 21:35:00.210480151 +0000 UTC m=+1238.956302800" watchObservedRunningTime="2026-01-30 21:35:00.232914521 +0000 UTC m=+1238.978737170" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.240271 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" podStartSLOduration=38.081949798 podStartE2EDuration="43.240248276s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:54.198686786 +0000 UTC m=+1232.944509435" lastFinishedPulling="2026-01-30 21:34:59.356985264 +0000 UTC m=+1238.102807913" observedRunningTime="2026-01-30 21:35:00.231363749 +0000 UTC m=+1238.977186408" watchObservedRunningTime="2026-01-30 21:35:00.240248276 +0000 UTC m=+1238.986070935" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.251544 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" podStartSLOduration=4.057083084 podStartE2EDuration="43.251521807s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.163792544 +0000 UTC m=+1198.909615193" lastFinishedPulling="2026-01-30 21:34:59.358231277 +0000 UTC m=+1238.104053916" observedRunningTime="2026-01-30 21:35:00.25086545 +0000 UTC m=+1238.996688099" watchObservedRunningTime="2026-01-30 21:35:00.251521807 +0000 UTC m=+1238.997344456" Jan 30 21:35:00 crc kubenswrapper[4751]: I0130 21:35:00.896249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d48698d88-jbmh6" Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.168527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" event={"ID":"3b9cc057-30d7-4a03-8c76-a1ca7200dbae","Type":"ContainerStarted","Data":"4f33d3927639f134700295e7f1922aa32953c741e708de259922e5cf89fe6a65"} Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.169167 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:35:01 crc kubenswrapper[4751]: I0130 21:35:01.192820 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" podStartSLOduration=4.017583158 podStartE2EDuration="43.192792638s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.851744569 +0000 UTC m=+1199.597567218" lastFinishedPulling="2026-01-30 21:35:00.026954049 +0000 UTC m=+1238.772776698" observedRunningTime="2026-01-30 21:35:01.186039608 +0000 UTC m=+1239.931862257" watchObservedRunningTime="2026-01-30 21:35:01.192792638 +0000 UTC m=+1239.938615307" Jan 30 21:35:03 crc kubenswrapper[4751]: E0130 21:35:03.977478 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podUID="4a416a7c-3094-46ef-8370-9cad7446339b" Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.932865 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7mpjw" Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.951270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6fg4r" Jan 30 21:35:07 crc kubenswrapper[4751]: I0130 21:35:07.969705 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ph5lf" Jan 30 21:35:07 crc kubenswrapper[4751]: E0130 21:35:07.977844 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podUID="a986231c-2119-4a13-801d-51119db5d365" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.014254 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-b65fl" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.042554 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jxkmf" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.119032 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-hsbbr" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.123757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-n2shb" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.331030 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-sw6zv" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.370964 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7sk5v" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.411008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-tbp7n" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.456045 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-d6slz" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.593104 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-c7tj6" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.640963 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-r6smn" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.667429 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6749767b8f-62rqr" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.922254 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-sc9gq" Jan 30 21:35:08 crc kubenswrapper[4751]: I0130 21:35:08.984097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-gcvgx" Jan 30 21:35:10 crc kubenswrapper[4751]: I0130 21:35:10.044577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-52vr2" Jan 30 21:35:10 crc kubenswrapper[4751]: I0130 21:35:10.282576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk" Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.357893 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" event={"ID":"4a416a7c-3094-46ef-8370-9cad7446339b","Type":"ContainerStarted","Data":"d4abf5f03de72ca9b0b1149ac692919f1fe1c16e4e240c5e153ff2b647dcc260"} Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.358753 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:35:19 crc kubenswrapper[4751]: I0130 21:35:19.375409 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" podStartSLOduration=4.803639255 podStartE2EDuration="1m2.375391126s" podCreationTimestamp="2026-01-30 21:34:17 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.887185796 +0000 UTC m=+1199.633008445" lastFinishedPulling="2026-01-30 21:35:18.458937647 +0000 UTC m=+1257.204760316" observedRunningTime="2026-01-30 
21:35:19.372069107 +0000 UTC m=+1258.117891826" watchObservedRunningTime="2026-01-30 21:35:19.375391126 +0000 UTC m=+1258.121213775" Jan 30 21:35:21 crc kubenswrapper[4751]: I0130 21:35:21.383245 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" event={"ID":"a986231c-2119-4a13-801d-51119db5d365","Type":"ContainerStarted","Data":"f1ebf2d10c1dabf042805319fd6922fa6222ef7dee3ae7cb0cab17ccb2932acc"} Jan 30 21:35:21 crc kubenswrapper[4751]: I0130 21:35:21.415218 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v8vch" podStartSLOduration=3.831972231 podStartE2EDuration="1m3.415183968s" podCreationTimestamp="2026-01-30 21:34:18 +0000 UTC" firstStartedPulling="2026-01-30 21:34:20.853225229 +0000 UTC m=+1199.599047878" lastFinishedPulling="2026-01-30 21:35:20.436436966 +0000 UTC m=+1259.182259615" observedRunningTime="2026-01-30 21:35:21.406398794 +0000 UTC m=+1260.152221463" watchObservedRunningTime="2026-01-30 21:35:21.415183968 +0000 UTC m=+1260.161006657" Jan 30 21:35:24 crc kubenswrapper[4751]: I0130 21:35:24.127237 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:35:24 crc kubenswrapper[4751]: I0130 21:35:24.128120 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:35:28 crc kubenswrapper[4751]: I0130 21:35:28.379544 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-9vvgb" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.909914 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.911802 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922599 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922730 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922849 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.922888 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-m7mgl" Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.929875 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.997063 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:35:45 crc kubenswrapper[4751]: I0130 21:35:45.998836 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.001899 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.022383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.038086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.038260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140380 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140570 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.140729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.142630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 
21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.164482 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"dnsmasq-dns-675f4bcbfc-8tkr2\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.242252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.243109 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.243158 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.258983 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.260399 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"dnsmasq-dns-78dd6ddcc-5w7xw\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.325727 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.729800 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:35:46 crc kubenswrapper[4751]: I0130 21:35:46.847248 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:35:47 crc kubenswrapper[4751]: I0130 21:35:47.460276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" event={"ID":"ef9f34c1-a280-43a3-a78b-6a10c2972759","Type":"ContainerStarted","Data":"db81d7e720a3f967946b93c7cdb416134679dd7e6418e2fc62f067e92c234fe4"} Jan 30 21:35:47 crc kubenswrapper[4751]: I0130 21:35:47.462006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" event={"ID":"a28e93be-b42f-4075-9092-349b11c825bb","Type":"ContainerStarted","Data":"93642ef2b0df57668df0e7cd91eb49b59436760eab9b26ed0b040d59521f1d2c"} Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.508232 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.537845 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.539440 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.569243 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.689783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.689867 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.690004 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.791682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.792041 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod 
\"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.792176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.793353 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.793366 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.812757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"dnsmasq-dns-666b6646f7-whdw4\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") " pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.822738 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.854472 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.856569 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.860300 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.872304 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:48 crc kubenswrapper[4751]: I0130 21:35:48.997255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.098787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.100340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.100402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.120319 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgm5m\" (UniqueName: 
\"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"dnsmasq-dns-57d769cc4f-628lt\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.200665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.490119 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.669274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:35:49 crc kubenswrapper[4751]: W0130 21:35:49.682911 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33135688_6f3e_426e_be2b_0e455d6736e6.slice/crio-5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db WatchSource:0}: Error finding container 5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db: Status 404 returned error can't find the container with id 5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.701197 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.703727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.709264 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.710447 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.710570 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qdzcx" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712240 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.712370 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.715521 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.761498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.767825 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.769582 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.774546 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.776808 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.783712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.794538 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.823641 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.823987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824187 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 
21:35:49.824815 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.824869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.926986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927054 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927083 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927099 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927139 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927306 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927367 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927425 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927584 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927695 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927904 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.927999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928220 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928256 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: 
\"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928464 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928521 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928572 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.928637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 
21:35:49.928670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.929128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.930650 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.931149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.932795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.933230 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936060 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936082 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4befba5b3452f215320be9365c178860d706182c1f41ab25a94828e6255d8c2/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.936824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.944780 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.950961 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:49 crc kubenswrapper[4751]: I0130 21:35:49.975486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " pod="openstack/rabbitmq-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:49.997893 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:49.999639 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.001812 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvp6f" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002243 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002311 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002509 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.002652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.003233 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.009807 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.028829 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029876 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029914 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.029950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030094 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030135 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030200 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030389 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.030413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.031480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.033598 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.034982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.036393 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.036777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.037648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.038723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.039192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.039743 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.042892 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.043281 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: 
I0130 21:35:50.044786 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.044817 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38819e4ab89b59440f000d1a076c7489b3d13c82621db763cbf8d17a6b6689f4/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.047376 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.047436 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2001e391e04ee7d0edfbd20e4205f1b60c57288335d512357b8e0f2ce2f191a2/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.057785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.057899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.058376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.059193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.060627 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.064625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlq5r\" (UniqueName: 
\"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.066085 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.068437 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.087788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.099345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.136253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.136748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137199 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137456 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.137624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239675 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239869 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.239977 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.240574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.241688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.244089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.244771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.245128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.247539 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.247574 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea75936dffe846fa8fe6e7d04e4555ffbed93863b04fcd828432921ea88ef24a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.248656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.249886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.250306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.257674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.282564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.349305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.388072 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.403145 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.427620 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.508873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" event={"ID":"d3d45f11-44b0-4b38-b308-c99c83e52e6b","Type":"ContainerStarted","Data":"3071dbc640f12657ce923f3e1023fb8d61a64a9e5353065a4040dc6a73df2531"} Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.515667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerStarted","Data":"5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db"} Jan 30 21:35:50 crc kubenswrapper[4751]: I0130 21:35:50.608026 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.051175 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.092890 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:35:51 crc kubenswrapper[4751]: W0130 21:35:51.149670 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod192a5913_0c28_4214_9ac0_d37ca2eeb34c.slice/crio-e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821 WatchSource:0}: Error finding container e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821: Status 404 returned error can't find the container with id e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821 Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.210702 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.408605 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.411404 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.420463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429399 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4mml2" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429584 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.429824 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.430886 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.449769 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.494970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495073 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495205 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.495231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.534469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"a7dc563e23807f6efe79faed84ec9c2b00f86190217519d5f3838b56a30401b8"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.537957 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"14b244ff165ab8225e2f7204427c69fbfcfd61b1331f0eb3d778a03cddbe88d2"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.539503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.542312 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"f4f06c01fc35225b23f5f598399e00ef90da1d1a2d96b3cf839a507f64a8e8e3"} Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598373 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598413 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.598905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.599608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-config-data-default\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.601221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kolla-config\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.609340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55cd7e5-6799-4e1a-9f3b-a92937aca796-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614822 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.614849 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1693b5ff48275fd8237f04db183f9f1c5204f43a559a540307ddb2f7e8d8c98a/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.629662 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94k2l\" (UniqueName: \"kubernetes.io/projected/d55cd7e5-6799-4e1a-9f3b-a92937aca796-kube-api-access-94k2l\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.646833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55cd7e5-6799-4e1a-9f3b-a92937aca796-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.663758 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9cb78143-74df-4a95-9869-ac578dee880c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cb78143-74df-4a95-9869-ac578dee880c\") pod \"openstack-galera-0\" (UID: \"d55cd7e5-6799-4e1a-9f3b-a92937aca796\") " pod="openstack/openstack-galera-0" Jan 30 21:35:51 crc kubenswrapper[4751]: I0130 21:35:51.761989 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.452533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.744847 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.747925 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.749720 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-sfwqq" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.750002 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.752103 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.752704 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.771097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826349 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826434 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.826468 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928097 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928194 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.928763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.929493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.930704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.931558 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.935792 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.935820 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ecc16afcfb08d3e294674b5aabb1d2111b51ff7527b2043b17eabcde6ded2a05/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.938239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.943443 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:52 crc kubenswrapper[4751]: I0130 21:35:52.960470 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2zvf\" (UniqueName: \"kubernetes.io/projected/a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32-kube-api-access-h2zvf\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.021614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d42993b1-8455-4142-bdec-aeabe96436cb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d42993b1-8455-4142-bdec-aeabe96436cb\") pod \"openstack-cell1-galera-0\" (UID: \"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32\") " pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.085405 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.106088 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.107271 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.113982 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6f59h" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.114046 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.114255 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.122090 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134749 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.134933 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236774 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.236891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.239435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kolla-config\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.240023 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-config-data\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.259898 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-combined-ca-bundle\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.261605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-memcached-tls-certs\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.295007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5n54\" (UniqueName: \"kubernetes.io/projected/14c5f0f0-6d85-4d60-9daa-7fa3b401a884-kube-api-access-x5n54\") pod \"memcached-0\" (UID: \"14c5f0f0-6d85-4d60-9daa-7fa3b401a884\") " pod="openstack/memcached-0" Jan 30 21:35:53 crc kubenswrapper[4751]: I0130 21:35:53.451555 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.129757 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130096 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130140 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130894 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.130943 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" gracePeriod=600 Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701178 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" exitCode=0 Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e"} Jan 30 21:35:54 crc kubenswrapper[4751]: I0130 21:35:54.701266 4751 scope.go:117] "RemoveContainer" containerID="4084bd2e19ec539ac0bc075f3b6a34007de80a7e632827590212d241d8cb0234" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.244685 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.247689 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.250838 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qch6t" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.280407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.312162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.415178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.445853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"kube-state-metrics-0\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " pod="openstack/kube-state-metrics-0" Jan 30 21:35:55 crc kubenswrapper[4751]: I0130 21:35:55.607766 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.059305 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.061032 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.063747 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.063798 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-vlpbg" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.082118 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.130590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.130636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.231929 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.231988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.252455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d7cf074-b623-45d0-ac84-c1e52a626885-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.258944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpm5\" (UniqueName: \"kubernetes.io/projected/0d7cf074-b623-45d0-ac84-c1e52a626885-kube-api-access-2fpm5\") pod \"observability-ui-dashboards-66cbf594b5-p97jc\" (UID: \"0d7cf074-b623-45d0-ac84-c1e52a626885\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.420925 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.446281 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.453458 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463362 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463522 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hjfsj" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463607 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463676 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463731 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.463754 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.468102 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.475601 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.491791 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.493152 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.576178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.626662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647736 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: 
\"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647810 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.647963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648006 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648483 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " 
pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.648556 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750723 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750827 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.750966 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.751114 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.756990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.757838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.758550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.758713 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.759128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-service-ca\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.759399 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.762826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-oauth-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.765176 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-oauth-config\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.765686 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.766781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.767074 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-console-serving-cert\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.768357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.769856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.769927 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-trusted-ca-bundle\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.772829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.773170 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.773230 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a95dfeb2de129561acc13a0d8e1495cdeeea1e8a0c06c82206df350d4e35d0bf/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.797003 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njg6\" (UniqueName: \"kubernetes.io/projected/3d38bf98-a3aa-46b1-ac58-309f83d20bbb-kube-api-access-2njg6\") pod \"console-567c7bd4b5-dnfxs\" (UID: \"3d38bf98-a3aa-46b1-ac58-309f83d20bbb\") " pod="openshift-console/console-567c7bd4b5-dnfxs"
Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.833090 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 21:35:56 crc kubenswrapper[4751]: I0130 21:35:56.869799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567c7bd4b5-dnfxs"
Jan 30 21:35:57 crc kubenswrapper[4751]: I0130 21:35:57.078586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.268892 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g9s48"]
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.270354 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.272594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.272791 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.273541 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zzwbs"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.288719 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48"]
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.333824 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"]
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.336180 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.350425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"]
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387598 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.387645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489177 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489266 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489343 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489409 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.489977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-log-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.491450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbc382fd-1513-4137-b801-5627cc5886ea-var-run-ovn\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.495028 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbc382fd-1513-4137-b801-5627cc5886ea-scripts\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.496095 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-ovn-controller-tls-certs\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.496883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc382fd-1513-4137-b801-5627cc5886ea-combined-ca-bundle\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.504581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gr6x\" (UniqueName: \"kubernetes.io/projected/fbc382fd-1513-4137-b801-5627cc5886ea-kube-api-access-2gr6x\") pod \"ovn-controller-g9s48\" (UID: \"fbc382fd-1513-4137-b801-5627cc5886ea\") " pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590888 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590930 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.590967 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-lib\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591292 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-run\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-var-log\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.591474 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/071bab49-34f0-4fef-849e-c2530b4c423c-etc-ovs\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.593385 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/071bab49-34f0-4fef-849e-c2530b4c423c-scripts\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.594864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.626945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpzxj\" (UniqueName: \"kubernetes.io/projected/071bab49-34f0-4fef-849e-c2530b4c423c-kube-api-access-vpzxj\") pod \"ovn-controller-ovs-f4rx8\" (UID: \"071bab49-34f0-4fef-849e-c2530b4c423c\") " pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:58 crc kubenswrapper[4751]: I0130 21:35:58.654265 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f4rx8"
Jan 30 21:35:59 crc kubenswrapper[4751]: W0130 21:35:59.170654 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55cd7e5_6799_4e1a_9f3b_a92937aca796.slice/crio-fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab WatchSource:0}: Error finding container fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab: Status 404 returned error can't find the container with id fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab
Jan 30 21:35:59 crc kubenswrapper[4751]: I0130 21:35:59.772376 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"fcecc736431886bee24a53cf715abb5fcc3c8a76b85cf79ff3d524c5ded806ab"}
Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.592812 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.598439 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
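The W0130 manager.go:1169 warning above is cAdvisor failing to resolve a new crio-... cgroup against CRI-O (a 404 on the container ID) moments before the PLEG ContainerStarted event for the same ID fcecc736..., which is typically the benign startup race between cgroup creation and CRI-O registering the container, not a lost container. A small sketch, under the same kubelet.log assumption as above, that pairs the two message types by container ID:

import re

# cAdvisor watch warnings (manager.go:1169) and PLEG ContainerStarted events
# both carry the 64-hex container ID; pairing them separates the benign
# startup race from containers that genuinely never started.
WATCH_404 = re.compile(r"Error finding container ([0-9a-f]{64})")
PLEG_START = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

missing, started = set(), set()
with open("kubelet.log") as fh:  # assumed local copy of this log
    for line in fh:
        if (m := WATCH_404.search(line)):
            missing.add(m.group(1))
        if (m := PLEG_START.search(line)):
            started.add(m.group(1))

print("404'd but later started (benign race):", sorted(missing & started))
print("404'd and never started (investigate):", sorted(missing - started))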
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.601476 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qmtjb" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602358 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602866 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.602994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.603095 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.609029 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.632899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633158 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633351 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.633400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735209 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735289 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735320 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735382 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.735420 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.737053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.737364 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-config\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.738089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f8708be-4bf5-440d-a6e3-876acf844253-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.741823 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.749798 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.749846 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/417b1a1fd95e5f32699b4df2d9f46dae6df6c0c601710dee8734902dce1c54a9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.753122 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.753201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4xt\" (UniqueName: \"kubernetes.io/projected/1f8708be-4bf5-440d-a6e3-876acf844253-kube-api-access-cr4xt\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.768729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8708be-4bf5-440d-a6e3-876acf844253-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.781345 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 
21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.784459 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.790074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-788lt" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.790354 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.792754 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.793188 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.793674 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.829116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b2a94e5-3eb6-4555-8d93-cc7723cb8e40\") pod \"ovsdbserver-sb-0\" (UID: \"1f8708be-4bf5-440d-a6e3-876acf844253\") " pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.936938 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938253 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938361 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " 
pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:02 crc kubenswrapper[4751]: I0130 21:36:02.938471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040643 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040906 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.040982 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.041654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.042110 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-config\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.042181 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47614a4a-f824-4eb4-9f46-bf1ab137d364-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.043022 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.043051 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f316b7d9fa0f84292f0c71966aa685e799b7597ab7ed0c55bf4b1d203e6cb9d/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.045502 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.045963 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.051362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47614a4a-f824-4eb4-9f46-bf1ab137d364-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.061252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zwk7\" (UniqueName: \"kubernetes.io/projected/47614a4a-f824-4eb4-9f46-bf1ab137d364-kube-api-access-9zwk7\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.080791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa0940c-eaeb-4e90-bbe7-803e984794dd\") pod \"ovsdbserver-nb-0\" (UID: \"47614a4a-f824-4eb4-9f46-bf1ab137d364\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:03 crc kubenswrapper[4751]: I0130 21:36:03.141006 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.484994 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.485533 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rt94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f18b5d57-5b05-4ef0-bae3-68938e094510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.489466 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510"
podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.581078 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.581468 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hw2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(61d75daf-41cb-4ab5-b849-c98080ca748b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.582830 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.691029 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.691198 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqvcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(2ed6288f-1f28-4189-a452-10ed3fa78c7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:10 crc kubenswrapper[4751]: E0130 21:36:10.692430 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" 
podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" Jan 30 21:36:17 crc kubenswrapper[4751]: I0130 21:36:17.189855 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567c7bd4b5-dnfxs"] Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.830887 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.831622 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvr5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-whdw4_openstack(d3d45f11-44b0-4b38-b308-c99c83e52e6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.832838 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.833504 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc 
kubenswrapper[4751]: E0130 21:36:17.833609 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgm5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-628lt_openstack(33135688-6f3e-426e-be2b-0e455d6736e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.834879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.923514 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.923905 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5w7xw_openstack(ef9f34c1-a280-43a3-a78b-6a10c2972759): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.925725 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" podUID="ef9f34c1-a280-43a3-a78b-6a10c2972759" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.935846 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.935977 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8tkr2_openstack(a28e93be-b42f-4075-9092-349b11c825bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:36:17 crc kubenswrapper[4751]: E0130 21:36:17.939979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" podUID="a28e93be-b42f-4075-9092-349b11c825bb" Jan 30 21:36:18 crc kubenswrapper[4751]: E0130 21:36:18.022527 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" Jan 30 21:36:18 crc kubenswrapper[4751]: E0130 21:36:18.022806 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" Jan 30 21:36:18 crc kubenswrapper[4751]: I0130 21:36:18.101513 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567c7bd4b5-dnfxs" event={"ID":"3d38bf98-a3aa-46b1-ac58-309f83d20bbb","Type":"ContainerStarted","Data":"a537cc6d1a4f5b6006c4775c057a3275f3eb40ec990f4c8224cdc294744a4571"} Jan 30 21:36:18 crc kubenswrapper[4751]: I0130 21:36:18.994781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567c7bd4b5-dnfxs" event={"ID":"3d38bf98-a3aa-46b1-ac58-309f83d20bbb","Type":"ContainerStarted","Data":"82606cceefd5a69bcd58f56f2b7aa736bc30a7c8262640380b5522dd54432246"} Jan 30 21:36:19 crc kubenswrapper[4751]: 
I0130 21:36:19.000837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0"} Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.005633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.006463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.016883 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.024274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.034172 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-567c7bd4b5-dnfxs" podStartSLOduration=23.03415532 podStartE2EDuration="23.03415532s" podCreationTimestamp="2026-01-30 21:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:19.020797223 +0000 UTC m=+1317.766619872" watchObservedRunningTime="2026-01-30 21:36:19.03415532 +0000 UTC m=+1317.779977969" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.099055 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.220276 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc"] Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.224419 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d7cf074_b623_45d0_ac84_c1e52a626885.slice/crio-b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02 WatchSource:0}: Error finding container b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02: Status 404 returned error can't find the container with id b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02 Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.386647 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.431818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.590696 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56430b1_227c_4074_8d43_86953ab9f911.slice/crio-14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828 WatchSource:0}: Error finding container 14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828: Status 404 returned error can't find the container with id 14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828 Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.731968 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.733831 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.739652 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:36:19 crc kubenswrapper[4751]: W0130 21:36:19.800523 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8708be_4bf5_440d_a6e3_876acf844253.slice/crio-2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b WatchSource:0}: Error finding container 2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b: Status 404 returned error can't find the container with id 2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837118 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") pod \"a28e93be-b42f-4075-9092-349b11c825bb\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") pod \"a28e93be-b42f-4075-9092-349b11c825bb\" (UID: \"a28e93be-b42f-4075-9092-349b11c825bb\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837274 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837397 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837458 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") pod \"ef9f34c1-a280-43a3-a78b-6a10c2972759\" (UID: \"ef9f34c1-a280-43a3-a78b-6a10c2972759\") " Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config" (OuterVolumeSpecName: "config") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.837799 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config" (OuterVolumeSpecName: "config") pod "a28e93be-b42f-4075-9092-349b11c825bb" (UID: "a28e93be-b42f-4075-9092-349b11c825bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838288 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838308 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28e93be-b42f-4075-9092-349b11c825bb-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.838317 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f34c1-a280-43a3-a78b-6a10c2972759-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.884606 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld" (OuterVolumeSpecName: "kube-api-access-pmqld") pod "a28e93be-b42f-4075-9092-349b11c825bb" (UID: "a28e93be-b42f-4075-9092-349b11c825bb"). InnerVolumeSpecName "kube-api-access-pmqld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.884924 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c" (OuterVolumeSpecName: "kube-api-access-bpc4c") pod "ef9f34c1-a280-43a3-a78b-6a10c2972759" (UID: "ef9f34c1-a280-43a3-a78b-6a10c2972759"). InnerVolumeSpecName "kube-api-access-bpc4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.939989 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpc4c\" (UniqueName: \"kubernetes.io/projected/ef9f34c1-a280-43a3-a78b-6a10c2972759-kube-api-access-bpc4c\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:19 crc kubenswrapper[4751]: I0130 21:36:19.940029 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqld\" (UniqueName: \"kubernetes.io/projected/a28e93be-b42f-4075-9092-349b11c825bb-kube-api-access-pmqld\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.037691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"2a1d79f1a4427b17d3048fce28839bbbeb15047fcb8e8523dd0b3a8f9060a86b"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.046288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"14c5f0f0-6d85-4d60-9daa-7fa3b401a884","Type":"ContainerStarted","Data":"69f55323b26a6cdd15f489b8bab8fa2c94d373221ee4208573c2dc948afeb570"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.048080 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" event={"ID":"0d7cf074-b623-45d0-ac84-c1e52a626885","Type":"ContainerStarted","Data":"b51f1d2fc02ae142d535c4c594ec2589285cd615fe0c9a7575d7f39eff05eb02"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.049194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48" event={"ID":"fbc382fd-1513-4137-b801-5627cc5886ea","Type":"ContainerStarted","Data":"a6330356b450bf43fea6bed9b3391f5252032f25720121c5607581805f1db4ed"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.050495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" event={"ID":"a28e93be-b42f-4075-9092-349b11c825bb","Type":"ContainerDied","Data":"93642ef2b0df57668df0e7cd91eb49b59436760eab9b26ed0b040d59521f1d2c"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.050553 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8tkr2" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.053185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.056575 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.058102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.060764 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" event={"ID":"ef9f34c1-a280-43a3-a78b-6a10c2972759","Type":"ContainerDied","Data":"db81d7e720a3f967946b93c7cdb416134679dd7e6418e2fc62f067e92c234fe4"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.060812 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5w7xw" Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.066543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.071616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerStarted","Data":"3bb2d0d293bcca63ced4a6eec87e280101ac65a5555311aa13f1e064ca31af8e"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.092845 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.092994 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"0e6eb1d3860c9b9375ea706f9f68de934a1efb81362d1bedbd70261ef0caab70"} Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.205157 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.215573 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5w7xw"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.255381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.266158 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8tkr2"] Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.776711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 21:36:20 
crc kubenswrapper[4751]: W0130 21:36:20.833501 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47614a4a_f824_4eb4_9f46_bf1ab137d364.slice/crio-44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098 WatchSource:0}: Error finding container 44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098: Status 404 returned error can't find the container with id 44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098 Jan 30 21:36:20 crc kubenswrapper[4751]: I0130 21:36:20.874290 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f4rx8"] Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.105257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"44dc5bc2f5c75e22d2cba7e160c581d67e6761e56e074418b1bb50f482c20098"} Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.107051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9"} Jan 30 21:36:21 crc kubenswrapper[4751]: W0130 21:36:21.566559 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071bab49_34f0_4fef_849e_c2530b4c423c.slice/crio-68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3 WatchSource:0}: Error finding container 68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3: Status 404 returned error can't find the container with id 68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3 Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.996491 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28e93be-b42f-4075-9092-349b11c825bb" path="/var/lib/kubelet/pods/a28e93be-b42f-4075-9092-349b11c825bb/volumes" Jan 30 21:36:21 crc kubenswrapper[4751]: I0130 21:36:21.997343 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9f34c1-a280-43a3-a78b-6a10c2972759" path="/var/lib/kubelet/pods/ef9f34c1-a280-43a3-a78b-6a10c2972759/volumes" Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.125857 4751 generic.go:334] "Generic (PLEG): container finished" podID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerID="b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0" exitCode=0 Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.126016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerDied","Data":"b41336e87950b050089b8d0b576106edc16f6aafa733c3c6906a17f623e03fa0"} Jan 30 21:36:22 crc kubenswrapper[4751]: I0130 21:36:22.128429 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"68b8075a80c57b4dc37cb7c0152a8a45188bad145d4bb1bfb58ae42e790178f3"} Jan 30 21:36:24 crc kubenswrapper[4751]: I0130 21:36:24.152419 4751 generic.go:334] "Generic (PLEG): container finished" podID="a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32" containerID="579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450" exitCode=0 Jan 30 21:36:24 crc kubenswrapper[4751]: I0130 21:36:24.152685 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerDied","Data":"579631c272df8c432bd7df8c2f2c3693effbf544fdbdee73f85ac0888ded0450"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.171542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" event={"ID":"0d7cf074-b623-45d0-ac84-c1e52a626885","Type":"ContainerStarted","Data":"567363d1a20e1743552dc1ae55168c90c044abbcc885a33caf9ff900c535d100"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.173110 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"14c5f0f0-6d85-4d60-9daa-7fa3b401a884","Type":"ContainerStarted","Data":"ef01dd13651cb8612a3cea1fc4418d761472f371dfd1a8a92ded40a74991eefb"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.173249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.175364 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerStarted","Data":"c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.175467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.176977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"7032f4ae7198894a0049096b3a54cbb586bf347b9dfaf23c0a7ed0644c1a5952"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.178987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"3d099065e961e6db517f1df053408df8047fc234a2320c57772d3fb26b65c47e"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.183838 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32","Type":"ContainerStarted","Data":"0678b0f1f04488f41d09d2691fbeb7d1630138970ed74140f21c85b66d911f15"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.188611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d55cd7e5-6799-4e1a-9f3b-a92937aca796","Type":"ContainerStarted","Data":"3cbe6016f28e4a7af5409f18e256b5913f2a3067820734d5757c5766433f5586"} Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.203640 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-p97jc" podStartSLOduration=25.767538459 podStartE2EDuration="31.203613859s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.227171661 +0000 UTC m=+1317.972994310" lastFinishedPulling="2026-01-30 21:36:24.663247061 +0000 UTC m=+1323.409069710" observedRunningTime="2026-01-30 21:36:26.192188454 +0000 UTC m=+1324.938011153" watchObservedRunningTime="2026-01-30 21:36:26.203613859 +0000 UTC m=+1324.949436508" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.244957 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=35.244931082 podStartE2EDuration="35.244931082s" podCreationTimestamp="2026-01-30 21:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:26.229689905 +0000 UTC m=+1324.975512574" watchObservedRunningTime="2026-01-30 21:36:26.244931082 +0000 UTC m=+1324.990753751" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.283994 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.488186544 podStartE2EDuration="31.283977334s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.488242858 +0000 UTC m=+1318.234065507" lastFinishedPulling="2026-01-30 21:36:25.284033648 +0000 UTC m=+1324.029856297" observedRunningTime="2026-01-30 21:36:26.283612414 +0000 UTC m=+1325.029435063" watchObservedRunningTime="2026-01-30 21:36:26.283977334 +0000 UTC m=+1325.029799983" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.284697 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.128779157 podStartE2EDuration="33.284684622s" podCreationTimestamp="2026-01-30 21:35:53 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.098692972 +0000 UTC m=+1317.844515621" lastFinishedPulling="2026-01-30 21:36:24.254598437 +0000 UTC m=+1323.000421086" observedRunningTime="2026-01-30 21:36:26.262214833 +0000 UTC m=+1325.008037482" watchObservedRunningTime="2026-01-30 21:36:26.284684622 +0000 UTC m=+1325.030507271" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.326696 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.589310552 podStartE2EDuration="36.326672193s" podCreationTimestamp="2026-01-30 21:35:50 +0000 UTC" firstStartedPulling="2026-01-30 21:35:59.173312737 +0000 UTC m=+1297.919135386" lastFinishedPulling="2026-01-30 21:36:17.910674378 +0000 UTC m=+1316.656497027" observedRunningTime="2026-01-30 21:36:26.31378747 +0000 UTC m=+1325.059610119" watchObservedRunningTime="2026-01-30 21:36:26.326672193 +0000 UTC m=+1325.072494842" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.870752 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.872189 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:26 crc kubenswrapper[4751]: I0130 21:36:26.878827 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.198205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48" event={"ID":"fbc382fd-1513-4137-b801-5627cc5886ea","Type":"ContainerStarted","Data":"bef48e968c54383f22b1749cb484521a899cbe28a538a43a6d252f3eb1f25a25"} Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.198583 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g9s48" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.200310 4751 generic.go:334] "Generic (PLEG): container finished" podID="071bab49-34f0-4fef-849e-c2530b4c423c" containerID="ffa2573ac7dc5b9bd71b4e537625e7c9a8c61dbd83a867ec7dfaee0cf9f5eb00" exitCode=0 Jan 30 21:36:27 crc 
kubenswrapper[4751]: I0130 21:36:27.200474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerDied","Data":"ffa2573ac7dc5b9bd71b4e537625e7c9a8c61dbd83a867ec7dfaee0cf9f5eb00"} Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.205132 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-567c7bd4b5-dnfxs" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.217858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g9s48" podStartSLOduration=23.147907578 podStartE2EDuration="29.217838935s" podCreationTimestamp="2026-01-30 21:35:58 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.099045391 +0000 UTC m=+1317.844868040" lastFinishedPulling="2026-01-30 21:36:25.168976748 +0000 UTC m=+1323.914799397" observedRunningTime="2026-01-30 21:36:27.215220935 +0000 UTC m=+1325.961043614" watchObservedRunningTime="2026-01-30 21:36:27.217838935 +0000 UTC m=+1325.963661584" Jan 30 21:36:27 crc kubenswrapper[4751]: I0130 21:36:27.284719 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"] Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.209614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1f8708be-4bf5-440d-a6e3-876acf844253","Type":"ContainerStarted","Data":"ce0bb679d4e618dfb33a5cd9fdea7bfd89b4de261b65a7141d120dab35183b8a"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.213030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47614a4a-f824-4eb4-9f46-bf1ab137d364","Type":"ContainerStarted","Data":"285fed604b34fa9ce63507a50b19e78645ad9d9b08e00bc6030cd353a6955aa7"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.216222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"718efc89487f0308768571832008d2392b357dc679fe87f9ac770fad001e8f1c"} Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.240952 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.541837156 podStartE2EDuration="27.240936259s" podCreationTimestamp="2026-01-30 21:36:01 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.803747948 +0000 UTC m=+1318.549570597" lastFinishedPulling="2026-01-30 21:36:27.502847051 +0000 UTC m=+1326.248669700" observedRunningTime="2026-01-30 21:36:28.235405542 +0000 UTC m=+1326.981228241" watchObservedRunningTime="2026-01-30 21:36:28.240936259 +0000 UTC m=+1326.986758898" Jan 30 21:36:28 crc kubenswrapper[4751]: I0130 21:36:28.267482 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.603566559 podStartE2EDuration="27.267461057s" podCreationTimestamp="2026-01-30 21:36:01 +0000 UTC" firstStartedPulling="2026-01-30 21:36:20.835850281 +0000 UTC m=+1319.581672930" lastFinishedPulling="2026-01-30 21:36:27.499744769 +0000 UTC m=+1326.245567428" observedRunningTime="2026-01-30 21:36:28.260133141 +0000 UTC m=+1327.005955780" watchObservedRunningTime="2026-01-30 21:36:28.267461057 +0000 UTC m=+1327.013283706" Jan 30 21:36:28 crc kubenswrapper[4751]: E0130 21:36:28.932832 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom 
tcp 38.102.83.39:48648->38.102.83.39:41127: write tcp 38.102.83.39:48648->38.102.83.39:41127: write: broken pipe Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.233512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f4rx8" event={"ID":"071bab49-34f0-4fef-849e-c2530b4c423c","Type":"ContainerStarted","Data":"378619f77f6f87c9795145256b5000917b55722ce6e850bbc4ec0ff90843608b"} Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.234303 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.234348 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.236923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f"} Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.257912 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f4rx8" podStartSLOduration=27.656064988 podStartE2EDuration="31.257895748s" podCreationTimestamp="2026-01-30 21:35:58 +0000 UTC" firstStartedPulling="2026-01-30 21:36:21.568678868 +0000 UTC m=+1320.314501517" lastFinishedPulling="2026-01-30 21:36:25.170509588 +0000 UTC m=+1323.916332277" observedRunningTime="2026-01-30 21:36:29.252214517 +0000 UTC m=+1327.998037166" watchObservedRunningTime="2026-01-30 21:36:29.257895748 +0000 UTC m=+1328.003718397" Jan 30 21:36:29 crc kubenswrapper[4751]: I0130 21:36:29.937888 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.010374 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.141894 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.208012 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.244695 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.244774 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.308759 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.635747 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.703335 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"] Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.705164 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.714678 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.722834 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.734896 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6bddb"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.736306 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.740575 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.759515 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bddb"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.784833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785092 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785279 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.785651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.871945 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889504 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889538 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.889748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovn-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890639 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e60cf673-3513-4af6-ac72-280908e95405-ovs-rundir\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890737 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60cf673-3513-4af6-ac72-280908e95405-config\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.890854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.891283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.896674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-combined-ca-bundle\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.900919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e60cf673-3513-4af6-ac72-280908e95405-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.924632 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.926260 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.929529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"dnsmasq-dns-5bf47b49b7-gcttq\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.937633 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.939166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cd44\" (UniqueName: \"kubernetes.io/projected/e60cf673-3513-4af6-ac72-280908e95405-kube-api-access-8cd44\") pod \"ovn-controller-metrics-6bddb\" (UID: \"e60cf673-3513-4af6-ac72-280908e95405\") " pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.968409 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"]
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993228 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993303 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:30 crc kubenswrapper[4751]: I0130 21:36:30.993427 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.031631 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.061177 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bddb"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095441 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.095686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.096685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.097951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.125880 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"dnsmasq-dns-8554648995-pl94b\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.274981 4751 generic.go:334] "Generic (PLEG): container finished" podID="33135688-6f3e-426e-be2b-0e455d6736e6" containerID="9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d" exitCode=0
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.278125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerDied","Data":"9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d"}
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.294799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.298529 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.355614 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420566 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") "
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") "
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.420820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") pod \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\" (UID: \"d3d45f11-44b0-4b38-b308-c99c83e52e6b\") "
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.422337 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.423675 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config" (OuterVolumeSpecName: "config") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.427510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f" (OuterVolumeSpecName: "kube-api-access-gvr5f") pod "d3d45f11-44b0-4b38-b308-c99c83e52e6b" (UID: "d3d45f11-44b0-4b38-b308-c99c83e52e6b"). InnerVolumeSpecName "kube-api-access-gvr5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526845 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526879 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvr5f\" (UniqueName: \"kubernetes.io/projected/d3d45f11-44b0-4b38-b308-c99c83e52e6b-kube-api-access-gvr5f\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.526892 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d45f11-44b0-4b38-b308-c99c83e52e6b-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.559251 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.561040 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564196 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-st2jm"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564318 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564530 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.564628 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.574122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.628656 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629213 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.629289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731364 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.731968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.732618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-scripts\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.732926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31a7def-755f-49e8-bf97-7e155bcc5113-config\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.736350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.736430 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.737354 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31a7def-755f-49e8-bf97-7e155bcc5113-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.760066 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwkz\" (UniqueName: \"kubernetes.io/projected/f31a7def-755f-49e8-bf97-7e155bcc5113-kube-api-access-ctwkz\") pod \"ovn-northd-0\" (UID: \"f31a7def-755f-49e8-bf97-7e155bcc5113\") " pod="openstack/ovn-northd-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.764738 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.770282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.875530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 21:36:31 crc kubenswrapper[4751]: I0130 21:36:31.883687 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.072041 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.074026 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt"
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.094085 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bddb"]
Jan 30 21:36:32 crc kubenswrapper[4751]: W0130 21:36:32.107759 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode60cf673_3513_4af6_ac72_280908e95405.slice/crio-f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50 WatchSource:0}: Error finding container f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50: Status 404 returned error can't find the container with id f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") "
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163379 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") "
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.163525 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") pod \"33135688-6f3e-426e-be2b-0e455d6736e6\" (UID: \"33135688-6f3e-426e-be2b-0e455d6736e6\") "
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.168126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m" (OuterVolumeSpecName: "kube-api-access-fgm5m") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "kube-api-access-fgm5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.200050 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.201954 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config" (OuterVolumeSpecName: "config") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.209205 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33135688-6f3e-426e-be2b-0e455d6736e6" (UID: "33135688-6f3e-426e-be2b-0e455d6736e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:32 crc kubenswrapper[4751]: W0130 21:36:32.211053 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa6dba67_6d0e_4b49_b9dd_0905f6ffe809.slice/crio-5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9 WatchSource:0}: Error finding container 5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9: Status 404 returned error can't find the container with id 5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266814 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgm5m\" (UniqueName: \"kubernetes.io/projected/33135688-6f3e-426e-be2b-0e455d6736e6-kube-api-access-fgm5m\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266844 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.266853 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33135688-6f3e-426e-be2b-0e455d6736e6-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.293162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bddb" event={"ID":"e60cf673-3513-4af6-ac72-280908e95405","Type":"ContainerStarted","Data":"f2b60fe877dbe4646eb60fc38c2dcbb637f0aa9c85e2ee1bd4066b815cddcc50"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.295701 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.295764 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"55b6cba15b1dde6f627ad66f8e7ca8bf6ccd049b3a97fa354a6cb717078364af"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.302117 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-whdw4"
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.302167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-whdw4" event={"ID":"d3d45f11-44b0-4b38-b308-c99c83e52e6b","Type":"ContainerDied","Data":"3071dbc640f12657ce923f3e1023fb8d61a64a9e5353065a4040dc6a73df2531"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.304468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerStarted","Data":"5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-628lt" event={"ID":"33135688-6f3e-426e-be2b-0e455d6736e6","Type":"ContainerDied","Data":"5b26c9f9622d5f37dabd6fb741797aade48188fd9c3b092f168e67b8d44a96db"}
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307184 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-628lt"
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.307188 4751 scope.go:117] "RemoveContainer" containerID="9c47ecbe42a77dd1b41021da0eb6f61ea6d658a2d38393fa7a8f216c5d640c6d"
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.433830 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.444644 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-whdw4"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.473609 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.483163 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-628lt"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.494896 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 21:36:32 crc kubenswrapper[4751]: I0130 21:36:32.504241 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.047835 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"]
Jan 30 21:36:33 crc kubenswrapper[4751]: E0130 21:36:33.048728 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.048747 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.049028 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" containerName="init"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.050004 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.053992 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.060146 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.086358 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.086485 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.098175 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hgg7b"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.100665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.115887 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hgg7b"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.173033 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195923 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.195981 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.298115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.298163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299010 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.299468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.311058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.319795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"keystone-1da7-account-create-update-q9cg8\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.328757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"keystone-db-create-hgg7b\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.330636 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mxcnd"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.331786 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.373237 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.374608 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerID="f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82" exitCode=0
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.376041 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mxcnd"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.376074 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82"}
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.393222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bddb" event={"ID":"e60cf673-3513-4af6-ac72-280908e95405","Type":"ContainerStarted","Data":"dfa0cc5e0e1a00048e5540825072f7736fb9b3f30a105d5fc4fb8fda5077dfc3"}
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.399309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"83bf62a32f5d5bd7adb294af5aaaa53cab4f2669572bd0f972b3ecfa96d0be73"}
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.401486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.401678 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.406182 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f39785c-2919-4c29-8405-fd314710c587" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" exitCode=0
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.407987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"}
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.418421 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.466618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.501951 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503697 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.503952 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.507009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.510429 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.521173 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.535855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"placement-db-create-mxcnd\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.557515 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.563407 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6bddb" podStartSLOduration=3.563387487 podStartE2EDuration="3.563387487s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:33.409574673 +0000 UTC m=+1332.155397322" watchObservedRunningTime="2026-01-30 21:36:33.563387487 +0000 UTC m=+1332.309210136"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.605485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.605559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.665207 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7tt6b"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.666658 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.683418 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7tt6b"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707337 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.707530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.708739 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.723072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"placement-fed1-account-create-update-ztdkt\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.766351 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.767795 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.777796 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.779590 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"]
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.805952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.809955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.810867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.827855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod \"glance-db-create-7tt6b\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.892379 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.911501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.911550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.912316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.925753 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"glance-a004-account-create-update-zkpzg\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.985532 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.985651 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33135688-6f3e-426e-be2b-0e455d6736e6" path="/var/lib/kubelet/pods/33135688-6f3e-426e-be2b-0e455d6736e6/volumes"
Jan 30 21:36:33 crc kubenswrapper[4751]: I0130 21:36:33.986180 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d45f11-44b0-4b38-b308-c99c83e52e6b" path="/var/lib/kubelet/pods/d3d45f11-44b0-4b38-b308-c99c83e52e6b/volumes"
Jan 30 21:36:34 crc kubenswrapper[4751]: I0130 21:36:34.101353 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.441921 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.443570 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.480309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.560463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.560614 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.577690 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.618657 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.620600 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.649590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.649647 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.662826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.662922 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.664312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.695465 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.696898 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.701465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.709913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"mysqld-exporter-openstack-db-create-p9lfn\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766596 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.766690 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml"
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.775394 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"]
Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.787583 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870230 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.870722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871747 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.871754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.872134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.910128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"dnsmasq-dns-b8fbc5445-4dbml\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.971783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.979364 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.979571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.980274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:35 crc kubenswrapper[4751]: I0130 21:36:35.998912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"mysqld-exporter-dd31-account-create-update-4hlqb\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.084007 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.647121 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.653666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.655864 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.655877 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.656169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2gfcw" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.656635 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.687787 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.735925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.742191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.747640 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.747847 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.748070 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.770526 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.771298 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-4mznf ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-4mznf ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-nkwvf" podUID="91b9a8dc-b59e-4e4c-832b-494faad41261" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.779451 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.780801 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795882 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.795971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.802687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.831386 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898546 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898598 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898654 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898677 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: E0130 21:36:36.898722 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:37.398704996 +0000 UTC m=+1336.144527645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898918 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.898972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899167 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899272 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899383 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899504 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-cache\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899615 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899732 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899791 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.899805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-lock\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.905675 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.905703 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8ae021dec1593a0996c8ad3e7a0be16c58e24389d91041271a46023fface37c6/globalmount\"" pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.907754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.917522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9dw\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-kube-api-access-2q9dw\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:36 crc kubenswrapper[4751]: I0130 21:36:36.960847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15aa0dd8-3a54-4cb5-aa28-c1ef970c7d80\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001383 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: 
\"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001486 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001692 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.001728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc 
kubenswrapper[4751]: I0130 21:36:37.002569 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.002949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.003477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.003729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.005011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.005344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007086 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007414 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.007564 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.008127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.010655 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.019519 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"swift-ring-rebalance-nkwvf\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.027542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"swift-ring-rebalance-vvq25\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.103147 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.409678 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.409902 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.409937 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: E0130 21:36:37.410007 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:38.409984721 +0000 UTC m=+1337.155807380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.443694 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.466613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.613960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") pod \"91b9a8dc-b59e-4e4c-832b-494faad41261\" (UID: \"91b9a8dc-b59e-4e4c-832b-494faad41261\") " Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.614340 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.615370 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts" (OuterVolumeSpecName: "scripts") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.616473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617471 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617574 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617585 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91b9a8dc-b59e-4e4c-832b-494faad41261-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617628 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91b9a8dc-b59e-4e4c-832b-494faad41261-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.617634 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.619880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.627480 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf" (OuterVolumeSpecName: "kube-api-access-4mznf") pod "91b9a8dc-b59e-4e4c-832b-494faad41261" (UID: "91b9a8dc-b59e-4e4c-832b-494faad41261"). InnerVolumeSpecName "kube-api-access-4mznf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719458 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719492 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719501 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91b9a8dc-b59e-4e4c-832b-494faad41261-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:37 crc kubenswrapper[4751]: I0130 21:36:37.719509 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mznf\" (UniqueName: \"kubernetes.io/projected/91b9a8dc-b59e-4e4c-832b-494faad41261-kube-api-access-4mznf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.434823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435253 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435409 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: E0130 21:36:38.435462 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:40.435445457 +0000 UTC m=+1339.181268116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.453145 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nkwvf" Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.501127 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:38 crc kubenswrapper[4751]: I0130 21:36:38.509737 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nkwvf"] Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.466005 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f" exitCode=0 Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.466122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f"} Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.965080 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mxcnd"] Jan 30 21:36:39 crc kubenswrapper[4751]: I0130 21:36:39.997948 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b9a8dc-b59e-4e4c-832b-494faad41261" path="/var/lib/kubelet/pods/91b9a8dc-b59e-4e4c-832b-494faad41261/volumes" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.367632 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.373724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.375731 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.387504 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.478945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerStarted","Data":"6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.480381 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487150 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerID="3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938" exitCode=0 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerDied","Data":"3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.487291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerStarted","Data":"80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.489070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"04a156d706153c93639a688901d350f0328c9c9d4da2ad561e9e390f6d44d74b"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.489170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f31a7def-755f-49e8-bf97-7e155bcc5113","Type":"ContainerStarted","Data":"cae3385f7099b9769debf5ffa6a014862e4a10fa087c1d2c65a218585f72a8f6"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.491187 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495117 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495204 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.495372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495586 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495612 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:40 crc kubenswrapper[4751]: E0130 21:36:40.495673 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:44.495659077 +0000 UTC m=+1343.241481726 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497524 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerStarted","Data":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"} Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497666 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns" containerID="cri-o://b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" gracePeriod=10 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.497760 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.526078 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-pl94b" podStartSLOduration=10.526058919 podStartE2EDuration="10.526058919s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:40.501526694 +0000 UTC m=+1339.247349343" watchObservedRunningTime="2026-01-30 21:36:40.526058919 +0000 UTC m=+1339.271881568" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.558071 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.42751228 podStartE2EDuration="9.558048342s" podCreationTimestamp="2026-01-30 21:36:31 +0000 UTC" firstStartedPulling="2026-01-30 21:36:32.473099721 +0000 UTC m=+1331.218922370" lastFinishedPulling="2026-01-30 21:36:39.603635773 +0000 UTC m=+1338.349458432" observedRunningTime="2026-01-30 21:36:40.529400328 +0000 UTC m=+1339.275222977" watchObservedRunningTime="2026-01-30 21:36:40.558048342 +0000 UTC m=+1339.303870991" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.570554 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podStartSLOduration=10.570536736 podStartE2EDuration="10.570536736s" podCreationTimestamp="2026-01-30 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:40.547354428 +0000 UTC m=+1339.293177077" watchObservedRunningTime="2026-01-30 21:36:40.570536736 +0000 UTC m=+1339.316359385" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.596557 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.596684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod 
\"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.598118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.620049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"root-account-create-update-8xlxv\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.745619 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.756422 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.762835 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035 WatchSource:0}: Error finding container 697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035: Status 404 returned error can't find the container with id 697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.763419 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48 WatchSource:0}: Error finding container e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48: Status 404 returned error can't find the container with id e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.768448 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.776961 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37f348fb_7f83_40db_98b2_7e8bc603a3e6.slice/crio-df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31 WatchSource:0}: Error finding container df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31: Status 404 returned error can't find the container with id df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31 Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.783360 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vvq25"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.798347 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.807654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.822652 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7tt6b"] Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.824701 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70af95fb_5ca8_4482_a1bc_81b1891e0da7.slice/crio-68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8 WatchSource:0}: Error finding container 68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8: Status 404 returned error can't find the container with id 68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.837497 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93341dcd_a293_4879_8baf_855556383780.slice/crio-d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e WatchSource:0}: Error finding container d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e: Status 404 returned error can't find the container with id d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.866707 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hgg7b"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.878583 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"] Jan 30 21:36:40 crc kubenswrapper[4751]: I0130 21:36:40.896451 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.903458 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1373e37_3653_4f5d_9978_9d1cca4e546b.slice/crio-4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995 WatchSource:0}: Error finding container 4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995: Status 404 returned error can't find the container with id 4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995 Jan 30 21:36:40 crc kubenswrapper[4751]: W0130 21:36:40.906948 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a5d5df_fe19_49f0_b82a_afbe70b4c9f2.slice/crio-c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf WatchSource:0}: Error finding container c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf: Status 404 returned error can't find the container with id c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.157405 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309317 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309740 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309778 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.309884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") pod \"6f39785c-2919-4c29-8405-fd314710c587\" (UID: \"6f39785c-2919-4c29-8405-fd314710c587\") " Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.323566 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf" (OuterVolumeSpecName: "kube-api-access-kr2tf") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "kube-api-access-kr2tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.414807 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr2tf\" (UniqueName: \"kubernetes.io/projected/6f39785c-2919-4c29-8405-fd314710c587-kube-api-access-kr2tf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.544313 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.563491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.580984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.603447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerStarted","Data":"d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.603495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerStarted","Data":"e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.632356 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.632377 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633466 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f39785c-2919-4c29-8405-fd314710c587" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" exitCode=0 Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633631 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.633647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.634679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" event={"ID":"6f39785c-2919-4c29-8405-fd314710c587","Type":"ContainerDied","Data":"55b6cba15b1dde6f627ad66f8e7ca8bf6ccd049b3a97fa354a6cb717078364af"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.634724 4751 scope.go:117] "RemoveContainer" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.644585 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-1da7-account-create-update-q9cg8" podStartSLOduration=8.644553228 podStartE2EDuration="8.644553228s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.632935468 +0000 UTC m=+1340.378758117" watchObservedRunningTime="2026-01-30 21:36:41.644553228 +0000 UTC m=+1340.390375877" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.646603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerStarted","Data":"1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.646647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerStarted","Data":"df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.662988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerStarted","Data":"73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.663050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerStarted","Data":"ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.669121 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a004-account-create-update-zkpzg" podStartSLOduration=8.669104333 podStartE2EDuration="8.669104333s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.663223756 +0000 UTC m=+1340.409046405" watchObservedRunningTime="2026-01-30 21:36:41.669104333 +0000 UTC m=+1340.414926982" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.672433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerStarted","Data":"c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.686719 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" podStartSLOduration=6.686703312 podStartE2EDuration="6.686703312s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.683095346 +0000 UTC m=+1340.428917995" watchObservedRunningTime="2026-01-30 21:36:41.686703312 +0000 UTC m=+1340.432525961" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.697139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerStarted","Data":"327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.697247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerStarted","Data":"4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.706923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerStarted","Data":"c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.706987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" 
event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerStarted","Data":"d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.707589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config" (OuterVolumeSpecName: "config") pod "6f39785c-2919-4c29-8405-fd314710c587" (UID: "6f39785c-2919-4c29-8405-fd314710c587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.712673 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-hgg7b" podStartSLOduration=8.712652105 podStartE2EDuration="8.712652105s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.707455597 +0000 UTC m=+1340.453278246" watchObservedRunningTime="2026-01-30 21:36:41.712652105 +0000 UTC m=+1340.458474744" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.718838 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerStarted","Data":"68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"91fac1793a7a2b8a269edafca995d78c1aceb7914291bdd22c295ca0ed226b45"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.729350 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-7tt6b" podStartSLOduration=8.729309869 podStartE2EDuration="8.729309869s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.728238641 +0000 UTC m=+1340.474061290" watchObservedRunningTime="2026-01-30 21:36:41.729309869 +0000 UTC m=+1340.475132518" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.735440 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39785c-2919-4c29-8405-fd314710c587-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.772375 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerStarted","Data":"963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7"} Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.772554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerStarted","Data":"697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035"} Jan 
30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.817444 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fed1-account-create-update-ztdkt" podStartSLOduration=8.817425921 podStartE2EDuration="8.817425921s" podCreationTimestamp="2026-01-30 21:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.767642723 +0000 UTC m=+1340.513465372" watchObservedRunningTime="2026-01-30 21:36:41.817425921 +0000 UTC m=+1340.563248570" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.865339 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" podStartSLOduration=6.865299059 podStartE2EDuration="6.865299059s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:41.804025093 +0000 UTC m=+1340.549847742" watchObservedRunningTime="2026-01-30 21:36:41.865299059 +0000 UTC m=+1340.611121728" Jan 30 21:36:41 crc kubenswrapper[4751]: I0130 21:36:41.935539 4751 scope.go:117] "RemoveContainer" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.130869 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-conmon-d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod722402dd_bf51_47a6_b20e_85aec93527d9.slice/crio-d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb683b6d_9110_46e1_8406_eea86d9cc73b.slice/crio-ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb683b6d_9110_46e1_8406_eea86d9cc73b.slice/crio-conmon-ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc6a33_d240_4982_ade1_668f5da8b516.slice/crio-conmon-963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55099194_6cb2_437d_ae0d_a08c104de380.slice/crio-73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.304216 4751 scope.go:117] "RemoveContainer" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.305073 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": container with ID starting with b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd not found: ID does not exist" containerID="b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305102 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd"} err="failed to get container status \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": rpc error: code = NotFound desc = could not find container \"b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd\": container with ID starting with b6b3c4d303d05b0a0f708157ec8513f5dfe4b966ad2d29989c2873f44e7cbabd not found: ID does not exist" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305121 4751 scope.go:117] "RemoveContainer" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: E0130 21:36:42.305681 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": container with ID starting with ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605 not found: ID does not exist" containerID="ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.305704 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605"} err="failed to get container status \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": rpc error: code = NotFound desc = could not find container \"ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605\": container with ID starting with ceb05bc26ceb74ab33848c8dc2ae7e47ba1d5123056e72a37cc4e8a9b93ad605 not found: ID does not exist" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.335076 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.461897 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") pod \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.461951 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") pod \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\" (UID: \"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5\") " Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.463772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" (UID: "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5"). InnerVolumeSpecName "operator-scripts". 
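PluginName "kubernetes.io/configmap", VolumeGidValue ""

The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are a benign race rather than a real fault: scope.go has already asked CRI-O to remove containers b6b3c4d3... and ceb05bc2..., so the follow-up status lookups return gRPC NotFound. A minimal sketch of tolerating that outcome, assuming the google.golang.org/grpc module is available; the helper name is hypothetical, not kubelet code:

```go
// removeIfPresent treats a gRPC NotFound from a CRI-style lookup as
// success: a container that is already gone is the desired end state.
// Illustrative only.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func removeIfPresent(lookup func(id string) error, id string) error {
	err := lookup(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed; nothing to do\n", id)
		return nil
	}
	if err != nil {
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	// The container still exists; the actual remove call would go here.
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	_ = removeIfPresent(gone, "b6b3c4d303d0")
}
```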
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.486078 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc" (OuterVolumeSpecName: "kube-api-access-pxklc") pod "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" (UID: "ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5"). InnerVolumeSpecName "kube-api-access-pxklc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.564808 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxklc\" (UniqueName: \"kubernetes.io/projected/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-kube-api-access-pxklc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.564842 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.785937 4751 generic.go:334] "Generic (PLEG): container finished" podID="55099194-6cb2-437d-ae0d-a08c104de380" containerID="73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.786264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerDied","Data":"73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.794970 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerID="ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.795020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.795042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerStarted","Data":"d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.802525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.816989 4751 generic.go:334] "Generic (PLEG): container finished" podID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerID="47ae5041500feb907ed9d9736f2e4bbce3e444b85130301585ffd13ba081d9a9" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.817059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerDied","Data":"47ae5041500feb907ed9d9736f2e4bbce3e444b85130301585ffd13ba081d9a9"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.822181 4751 generic.go:334] "Generic (PLEG): container finished" podID="7be55860-0016-49cf-9505-9692dd9ccd36" containerID="ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151" exitCode=0 Jan 30 21:36:42 
crc kubenswrapper[4751]: I0130 21:36:42.822362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerDied","Data":"ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.822384 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerStarted","Data":"414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825434 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerID="963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerDied","Data":"963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.825839 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" podStartSLOduration=7.825818922 podStartE2EDuration="7.825818922s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:42.820523151 +0000 UTC m=+1341.566345800" watchObservedRunningTime="2026-01-30 21:36:42.825818922 +0000 UTC m=+1341.571641581" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.827221 4751 generic.go:334] "Generic (PLEG): container finished" podID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerID="1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.827255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerDied","Data":"1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.828635 4751 generic.go:334] "Generic (PLEG): container finished" podID="722402dd-bf51-47a6-b20e-85aec93527d9" containerID="d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.828671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerDied","Data":"d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830144 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mxcnd" event={"ID":"ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5","Type":"ContainerDied","Data":"80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830166 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ee5941bdfee40d36adda8b22fe45b35867b32b4e232e517bcd95751ece2d05" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.830229 4751 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mxcnd" Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.843035 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerID="327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.843107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerDied","Data":"327aabac3be4ee9fde091b36b1b374aaf9d59f04f57b4504442450704eca0e64"} Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.848135 4751 generic.go:334] "Generic (PLEG): container finished" podID="93341dcd-a293-4879-8baf-855556383780" containerID="c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601" exitCode=0 Jan 30 21:36:42 crc kubenswrapper[4751]: I0130 21:36:42.848223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerDied","Data":"c18d43f25fad540cc4b6980ee198b0b5113db4829b6825bf308264ef91e01601"} Jan 30 21:36:44 crc kubenswrapper[4751]: I0130 21:36:44.537377 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.537524 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.538037 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:44 crc kubenswrapper[4751]: E0130 21:36:44.538095 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:36:52.538076106 +0000 UTC m=+1351.283898755 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift") pod "swift-storage-0" (UID: "4f6a1442-f7f7-499a-a7d5-c354d76ba9d5") : configmap "swift-ring-files" not found Jan 30 21:36:46 crc kubenswrapper[4751]: I0130 21:36:46.297506 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:48 crc kubenswrapper[4751]: I0130 21:36:48.926567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" event={"ID":"4fbc6a33-d240-4982-ade1-668f5da8b516","Type":"ContainerDied","Data":"697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035"} Jan 30 21:36:48 crc kubenswrapper[4751]: I0130 21:36:48.927001 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697aedf9ffbdda8a06dbe5ac5680879f0f4a2aad04f2d5ce719596367a25a035" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.165745 4751 util.go:48] "No ready sandbox for pod can be found. 
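Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn"

The two etc-swift failures above show the mount retry backing off: durationBeforeRetry is 4s at 21:36:40 and 8s at 21:36:44, because the projected volume for swift-storage-0 references the "swift-ring-files" ConfigMap that the swift-ring-rebalance job has not published yet. A minimal sketch of that doubling-with-a-cap shape; the initial wait and cap here are illustrative guesses, not kubelet's actual constants:

```go
// nextBackoff doubles the previous wait up to maxWait, the shape of the
// 4s -> 8s progression in the nestedpendingoperations errors above.
package main

import (
	"fmt"
	"time"
)

func nextBackoff(prev, maxWait time.Duration) time.Duration {
	if prev == 0 {
		return 500 * time.Millisecond // hypothetical initial wait
	}
	if next := 2 * prev; next < maxWait {
		return next
	}
	return maxWait
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 6; attempt++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, d)
	}
}
```

Once the rebalance job publishes the ConfigMap, a later retry window lets the mount succeed with no manual intervention.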
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.199935 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.207267 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.232794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.250976 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.266609 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.279263 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.298058 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.350692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") pod \"722402dd-bf51-47a6-b20e-85aec93527d9\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351001 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") pod \"55099194-6cb2-437d-ae0d-a08c104de380\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") pod \"722402dd-bf51-47a6-b20e-85aec93527d9\" (UID: \"722402dd-bf51-47a6-b20e-85aec93527d9\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") pod \"7be55860-0016-49cf-9505-9692dd9ccd36\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") pod \"55099194-6cb2-437d-ae0d-a08c104de380\" (UID: \"55099194-6cb2-437d-ae0d-a08c104de380\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") pod 
\"93341dcd-a293-4879-8baf-855556383780\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351227 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") pod \"7be55860-0016-49cf-9505-9692dd9ccd36\" (UID: \"7be55860-0016-49cf-9505-9692dd9ccd36\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351297 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") pod \"93341dcd-a293-4879-8baf-855556383780\" (UID: \"93341dcd-a293-4879-8baf-855556383780\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") pod \"4fbc6a33-d240-4982-ade1-668f5da8b516\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351373 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") pod \"4fbc6a33-d240-4982-ade1-668f5da8b516\" (UID: \"4fbc6a33-d240-4982-ade1-668f5da8b516\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351751 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55099194-6cb2-437d-ae0d-a08c104de380" (UID: "55099194-6cb2-437d-ae0d-a08c104de380"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.351907 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55099194-6cb2-437d-ae0d-a08c104de380-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.352449 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "722402dd-bf51-47a6-b20e-85aec93527d9" (UID: "722402dd-bf51-47a6-b20e-85aec93527d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.352577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7be55860-0016-49cf-9505-9692dd9ccd36" (UID: "7be55860-0016-49cf-9505-9692dd9ccd36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.353047 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fbc6a33-d240-4982-ade1-668f5da8b516" (UID: "4fbc6a33-d240-4982-ade1-668f5da8b516"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.354061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93341dcd-a293-4879-8baf-855556383780" (UID: "93341dcd-a293-4879-8baf-855556383780"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.356843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl" (OuterVolumeSpecName: "kube-api-access-glgbl") pod "7be55860-0016-49cf-9505-9692dd9ccd36" (UID: "7be55860-0016-49cf-9505-9692dd9ccd36"). InnerVolumeSpecName "kube-api-access-glgbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.358727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m" (OuterVolumeSpecName: "kube-api-access-9x92m") pod "93341dcd-a293-4879-8baf-855556383780" (UID: "93341dcd-a293-4879-8baf-855556383780"). InnerVolumeSpecName "kube-api-access-9x92m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359309 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp" (OuterVolumeSpecName: "kube-api-access-phkvp") pod "4fbc6a33-d240-4982-ade1-668f5da8b516" (UID: "4fbc6a33-d240-4982-ade1-668f5da8b516"). InnerVolumeSpecName "kube-api-access-phkvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359651 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db" (OuterVolumeSpecName: "kube-api-access-zj7db") pod "722402dd-bf51-47a6-b20e-85aec93527d9" (UID: "722402dd-bf51-47a6-b20e-85aec93527d9"). InnerVolumeSpecName "kube-api-access-zj7db". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.359678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l" (OuterVolumeSpecName: "kube-api-access-6t95l") pod "55099194-6cb2-437d-ae0d-a08c104de380" (UID: "55099194-6cb2-437d-ae0d-a08c104de380"). InnerVolumeSpecName "kube-api-access-6t95l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.452994 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") pod \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") pod \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\" (UID: \"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453363 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") pod \"d1373e37-3653-4f5d-9978-9d1cca4e546b\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453409 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") pod \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") pod \"d1373e37-3653-4f5d-9978-9d1cca4e546b\" (UID: \"d1373e37-3653-4f5d-9978-9d1cca4e546b\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453502 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") pod \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\" (UID: \"37f348fb-7f83-40db-98b2-7e8bc603a3e6\") " Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453557 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" (UID: "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.453837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1373e37-3653-4f5d-9978-9d1cca4e546b" (UID: "d1373e37-3653-4f5d-9978-9d1cca4e546b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.454165 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37f348fb-7f83-40db-98b2-7e8bc603a3e6" (UID: "37f348fb-7f83-40db-98b2-7e8bc603a3e6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.454989 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj7db\" (UniqueName: \"kubernetes.io/projected/722402dd-bf51-47a6-b20e-85aec93527d9-kube-api-access-zj7db\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455024 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1373e37-3653-4f5d-9978-9d1cca4e546b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455044 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t95l\" (UniqueName: \"kubernetes.io/projected/55099194-6cb2-437d-ae0d-a08c104de380-kube-api-access-6t95l\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455062 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/722402dd-bf51-47a6-b20e-85aec93527d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455083 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37f348fb-7f83-40db-98b2-7e8bc603a3e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455103 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be55860-0016-49cf-9505-9692dd9ccd36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455120 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x92m\" (UniqueName: \"kubernetes.io/projected/93341dcd-a293-4879-8baf-855556383780-kube-api-access-9x92m\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455137 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgbl\" (UniqueName: \"kubernetes.io/projected/7be55860-0016-49cf-9505-9692dd9ccd36-kube-api-access-glgbl\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455156 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455173 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93341dcd-a293-4879-8baf-855556383780-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455191 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fbc6a33-d240-4982-ade1-668f5da8b516-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.455209 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkvp\" (UniqueName: \"kubernetes.io/projected/4fbc6a33-d240-4982-ade1-668f5da8b516-kube-api-access-phkvp\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.456074 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl" (OuterVolumeSpecName: "kube-api-access-mphdl") pod "37f348fb-7f83-40db-98b2-7e8bc603a3e6" (UID: "37f348fb-7f83-40db-98b2-7e8bc603a3e6"). InnerVolumeSpecName "kube-api-access-mphdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.457502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs" (OuterVolumeSpecName: "kube-api-access-9vdvs") pod "d1373e37-3653-4f5d-9978-9d1cca4e546b" (UID: "d1373e37-3653-4f5d-9978-9d1cca4e546b"). InnerVolumeSpecName "kube-api-access-9vdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.457848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll" (OuterVolumeSpecName: "kube-api-access-9hpll") pod "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" (UID: "e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2"). InnerVolumeSpecName "kube-api-access-9hpll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556544 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hpll\" (UniqueName: \"kubernetes.io/projected/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2-kube-api-access-9hpll\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556576 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vdvs\" (UniqueName: \"kubernetes.io/projected/d1373e37-3653-4f5d-9978-9d1cca4e546b-kube-api-access-9vdvs\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.556588 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphdl\" (UniqueName: \"kubernetes.io/projected/37f348fb-7f83-40db-98b2-7e8bc603a3e6-kube-api-access-mphdl\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936360 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8xlxv" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8xlxv" event={"ID":"7be55860-0016-49cf-9505-9692dd9ccd36","Type":"ContainerDied","Data":"414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.936415 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414a9d0ca0d8ab7d602fb4a81109d4833ae86e6bbc20c6fb24a116f28a92d0b4" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fed1-account-create-update-ztdkt" event={"ID":"d1373e37-3653-4f5d-9978-9d1cca4e546b","Type":"ContainerDied","Data":"4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938727 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fed1-account-create-update-ztdkt" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.938744 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ada26d1b2093244b4841da5d56a5cb27cf06118eae54dce88a395277f8ba995" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7tt6b" event={"ID":"93341dcd-a293-4879-8baf-855556383780","Type":"ContainerDied","Data":"d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940777 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2cd1c9c8696f97faaf38f91732e913b1ec957c1fea1e4ffc25d93f7701a0b4e" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.940778 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7tt6b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" event={"ID":"55099194-6cb2-437d-ae0d-a08c104de380","Type":"ContainerDied","Data":"ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942696 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac159ae2cb6976ef8122c35a06fe61ae9b29a654dcff59cb32ef375cbdebcd34" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.942747 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-dd31-account-create-update-4hlqb" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.951388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerStarted","Data":"d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1da7-account-create-update-q9cg8" event={"ID":"722402dd-bf51-47a6-b20e-85aec93527d9","Type":"ContainerDied","Data":"e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956143 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1da7-account-create-update-q9cg8" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.956153 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12b2d3c1c414c985bea152833a4e7437f3747bfd05adf3b5739d215a1fadf48" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958387 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a004-account-create-update-zkpzg" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a004-account-create-update-zkpzg" event={"ID":"37f348fb-7f83-40db-98b2-7e8bc603a3e6","Type":"ContainerDied","Data":"df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.958438 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2d8a98804dd8716d1b278672e379bd608478eb951fa38ce8e97562ba876f31" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.961667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.964999 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-p9lfn" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.965010 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hgg7b" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.964989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hgg7b" event={"ID":"e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2","Type":"ContainerDied","Data":"c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf"} Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.965280 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c773f4baaf7dd922c65d6ad7b794d9c6a5b5a5d6d85daf6f5c9bf7f785bbaedf" Jan 30 21:36:49 crc kubenswrapper[4751]: I0130 21:36:49.983475 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vvq25" podStartSLOduration=5.778478441 podStartE2EDuration="13.983459006s" podCreationTimestamp="2026-01-30 21:36:36 +0000 UTC" firstStartedPulling="2026-01-30 21:36:40.83002564 +0000 UTC m=+1339.575848289" lastFinishedPulling="2026-01-30 21:36:49.035006205 +0000 UTC m=+1347.780828854" observedRunningTime="2026-01-30 21:36:49.972038472 +0000 UTC m=+1348.717861121" watchObservedRunningTime="2026-01-30 21:36:49.983459006 +0000 UTC m=+1348.729281655" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.888867 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889588 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889606 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889621 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889628 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update" Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.888867 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"]
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889588 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889606 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889621 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889628 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889646 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889652 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889672 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="init"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889678 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="init"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889689 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889696 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889709 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889715 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889726 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889734 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889744 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889750 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889759 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889765 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889777 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889783 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: E0130 21:36:50.889796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889802 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.889982 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890004 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="55099194-6cb2-437d-ae0d-a08c104de380" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890015 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890025 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890036 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890046 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890062 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" containerName="mariadb-account-create-update"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890074 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f39785c-2919-4c29-8405-fd314710c587" containerName="dnsmasq-dns"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890080 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="93341dcd-a293-4879-8baf-855556383780" containerName="mariadb-database-create"
Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.890091 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" containerName="mariadb-database-create"
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.911139 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.973533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.994798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:50 crc kubenswrapper[4751]: I0130 21:36:50.995094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.041617 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.041826 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-pl94b" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" containerID="cri-o://6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" gracePeriod=10 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.099865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.100451 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.101410 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.101786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.102722 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.105545 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.109739 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.180703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"mysqld-exporter-openstack-cell1-db-create-29gtt\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.202928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.203049 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.207671 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.295892 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-pl94b" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.305214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.305361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.306573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.323339 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"mysqld-exporter-e51b-account-create-update-bskb2\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.421623 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.776957 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.785939 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8xlxv"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.880568 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"] Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.971167 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.987619 4751 generic.go:334] "Generic (PLEG): container finished" podID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" exitCode=0 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.990294 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be55860-0016-49cf-9505-9692dd9ccd36" path="/var/lib/kubelet/pods/7be55860-0016-49cf-9505-9692dd9ccd36/volumes" Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.990825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.991314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerStarted","Data":"bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995390 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerID="6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" exitCode=0 Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995468 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-pl94b" event={"ID":"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809","Type":"ContainerDied","Data":"5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9"} Jan 30 21:36:51 crc kubenswrapper[4751]: I0130 21:36:51.995507 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5084059983d5e17529806552161f9ac2cf353d5b0f25b7a0a25c23ba8ae664c9" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.000476 4751 generic.go:334] "Generic (PLEG): container finished" podID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerID="dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.000546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.009990 4751 generic.go:334] "Generic (PLEG): container finished" podID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerID="6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.010082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.015783 4751 generic.go:334] "Generic (PLEG): container finished" podID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerID="cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f" exitCode=0 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.015906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f"} Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.091268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"] Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.140959 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.234970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235408 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235617 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.235639 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.252515 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf" (OuterVolumeSpecName: "kube-api-access-hk2gf") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "kube-api-access-hk2gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.315001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.316673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.324975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config" (OuterVolumeSpecName: "config") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.336902 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") pod \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\" (UID: \"aa6dba67-6d0e-4b49-b9dd-0905f6ffe809\") " Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338789 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk2gf\" (UniqueName: \"kubernetes.io/projected/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-kube-api-access-hk2gf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338805 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338817 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338827 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: W0130 21:36:52.338897 4751 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.338909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" (UID: "aa6dba67-6d0e-4b49-b9dd-0905f6ffe809"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.348161 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b64b75d5d-kgc46" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console" containerID="cri-o://bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194" gracePeriod=15 Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.441390 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:52 crc kubenswrapper[4751]: I0130 21:36:52.543027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543219 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543247 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 21:36:52 crc kubenswrapper[4751]: E0130 21:36:52.543315 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift podName:4f6a1442-f7f7-499a-a7d5-c354d76ba9d5 nodeName:}" failed. No retries permitted until 2026-01-30 21:37:08.54329818 +0000 UTC m=+1367.289120819 (durationBeforeRetry 16s). 
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.026606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerStarted","Data":"654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.027070 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.030589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerStarted","Data":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.030838 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.033859 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerStarted","Data":"6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.046160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.055973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.056318 4751 generic.go:334] "Generic (PLEG): container finished" podID="bf03a732-e32e-410a-ae17-1573a2854475" containerID="bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194" exitCode=2
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.056441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerDied","Data":"bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.068657 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerStarted","Data":"c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05"}
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.069499 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
observedRunningTime="2026-01-30 21:36:53.050532577 +0000 UTC m=+1351.796355236" watchObservedRunningTime="2026-01-30 21:36:53.069783851 +0000 UTC m=+1351.815606500" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.071995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerStarted","Data":"8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.072257 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073682 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-pl94b" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerStarted","Data":"a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.073857 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerStarted","Data":"81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e"} Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.138453 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371971.716343 podStartE2EDuration="1m5.138432162s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:50.661021804 +0000 UTC m=+1289.406844453" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.109262354 +0000 UTC m=+1351.855085013" watchObservedRunningTime="2026-01-30 21:36:53.138432162 +0000 UTC m=+1351.884254811" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.138907 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" podStartSLOduration=3.138902554 podStartE2EDuration="3.138902554s" podCreationTimestamp="2026-01-30 21:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.125877427 +0000 UTC m=+1351.871700076" watchObservedRunningTime="2026-01-30 21:36:53.138902554 +0000 UTC m=+1351.884725203" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.163961 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.172068 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-pl94b"] Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.239430 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371971.615358 podStartE2EDuration="1m5.239416987s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:51.152886971 +0000 UTC m=+1289.898709620" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.211928873 +0000 UTC m=+1351.957751512" 
watchObservedRunningTime="2026-01-30 21:36:53.239416987 +0000 UTC m=+1351.985239636" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.268455 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" podStartSLOduration=2.268441802 podStartE2EDuration="2.268441802s" podCreationTimestamp="2026-01-30 21:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:36:53.264169108 +0000 UTC m=+1352.009991757" watchObservedRunningTime="2026-01-30 21:36:53.268441802 +0000 UTC m=+1352.014264451" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.376383 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=39.937956416 podStartE2EDuration="1m5.376358222s" podCreationTimestamp="2026-01-30 21:35:48 +0000 UTC" firstStartedPulling="2026-01-30 21:35:51.151582247 +0000 UTC m=+1289.897404896" lastFinishedPulling="2026-01-30 21:36:16.589984013 +0000 UTC m=+1315.335806702" observedRunningTime="2026-01-30 21:36:53.364128286 +0000 UTC m=+1352.109950945" watchObservedRunningTime="2026-01-30 21:36:53.376358222 +0000 UTC m=+1352.122180891" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.446393 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.446451 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46" Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.567888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.567974 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568013 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568106 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") " Jan 30 21:36:53 crc 
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") "
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.568240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") pod \"bf03a732-e32e-410a-ae17-1573a2854475\" (UID: \"bf03a732-e32e-410a-ae17-1573a2854475\") "
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config" (OuterVolumeSpecName: "console-config") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569566 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.569773 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.575528 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.575501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.584834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq" (OuterVolumeSpecName: "kube-api-access-zn5wq") pod "bf03a732-e32e-410a-ae17-1573a2854475" (UID: "bf03a732-e32e-410a-ae17-1573a2854475"). InnerVolumeSpecName "kube-api-access-zn5wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670271 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670474 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670549 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn5wq\" (UniqueName: \"kubernetes.io/projected/bf03a732-e32e-410a-ae17-1573a2854475-kube-api-access-zn5wq\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670605 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670657 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf03a732-e32e-410a-ae17-1573a2854475-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670707 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-console-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.670758 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf03a732-e32e-410a-ae17-1573a2854475-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.912610 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-r8wrn"]
Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913005 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913023 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console"
Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913041 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="init"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913047 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="init"
Jan 30 21:36:53 crc kubenswrapper[4751]: E0130 21:36:53.913061 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913068 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913253 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" containerName="dnsmasq-dns"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913272 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf03a732-e32e-410a-ae17-1573a2854475" containerName="console"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.913889 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.917098 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.917695 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb"
Jan 30 21:36:53 crc kubenswrapper[4751]: I0130 21:36:53.933062 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r8wrn"]
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.037626 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6dba67-6d0e-4b49-b9dd-0905f6ffe809" path="/var/lib/kubelet/pods/aa6dba67-6d0e-4b49-b9dd-0905f6ffe809/volumes"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.093890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.094053 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.102091 4751 generic.go:334] "Generic (PLEG): container finished" podID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerID="a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107" exitCode=0
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.102167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerDied","Data":"a4c6ded6bcebfebc69538cc39c7d557a4894403ba3b9a46406bf7a54b2fb9107"}
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.104566 4751 generic.go:334] "Generic (PLEG): container finished" podID="d36824ca-c5a8-4514-9276-e49126a66018" containerID="6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628" exitCode=0
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.104641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerDied","Data":"6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628"}
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107054 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b64b75d5d-kgc46_bf03a732-e32e-410a-ae17-1573a2854475/console/0.log"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b64b75d5d-kgc46" event={"ID":"bf03a732-e32e-410a-ae17-1573a2854475","Type":"ContainerDied","Data":"940073f1b9050f0a93c1aa8e842c9477fdedfec5ed669f60a6eea0cf2c8dd11a"}
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.107284 4751 scope.go:117] "RemoveContainer" containerID="bddd330cf13a903e94930cf7c65192196ece6d61e6ec543ac96c6b64e5e23194"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.108846 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b64b75d5d-kgc46"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.179891 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"]
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.189365 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b64b75d5d-kgc46"]
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195699 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195777 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195866 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.195985 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn"
Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.200635
4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.200879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.201697 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.212310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"glance-db-sync-r8wrn\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.229946 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:36:54 crc kubenswrapper[4751]: I0130 21:36:54.872314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-r8wrn"] Jan 30 21:36:54 crc kubenswrapper[4751]: W0130 21:36:54.877172 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32a93444_0221_40b7_9869_428788112ae2.slice/crio-7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a WatchSource:0}: Error finding container 7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a: Status 404 returned error can't find the container with id 7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a Jan 30 21:36:55 crc kubenswrapper[4751]: I0130 21:36:55.155413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerStarted","Data":"7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a"} Jan 30 21:36:55 crc kubenswrapper[4751]: I0130 21:36:55.988261 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf03a732-e32e-410a-ae17-1573a2854475" path="/var/lib/kubelet/pods/bf03a732-e32e-410a-ae17-1573a2854475/volumes" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.769782 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.771221 4751 util.go:30] "No sandbox for pod can be found. 
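
Every record in this log shares the klog header: a severity letter (I, W, or E above), an MMDD date, a wall-clock time with microseconds, the kubenswrapper PID (4751), and the emitting source file:line, followed by a structured message. A minimal Go sketch for splitting that header out, assuming records have first been re-joined one per line:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches the prefix on every record above, e.g.
    // `I0130 21:36:54.102091 4751 generic.go:334] ...`:
    // severity, MMDD date, time, PID, source file:line, then the message.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
        line := `I0130 21:36:54.102091 4751 generic.go:334] "Generic (PLEG): container finished" exitCode=0`
        if m := klogHeader.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }
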
Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.773431 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.778537 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.963867 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:56 crc kubenswrapper[4751]: I0130 21:36:56.963950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.065833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.065888 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.066679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.089524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"root-account-create-update-2xmqz\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.091119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.187977 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.193995 4751 generic.go:334] "Generic (PLEG): container finished" podID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerID="d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2" exitCode=0 Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.194062 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerDied","Data":"d406940fb6742e9578d31d784c5dc7b728af135cada6bb76ae850b8c64dbd1f2"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" event={"ID":"d36824ca-c5a8-4514-9276-e49126a66018","Type":"ContainerDied","Data":"bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197959 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfc01980f997333ad874c465402aadd998785e21bd958f0233d9e3ee82f2fd2d" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.197941 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-29gtt" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.200242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" event={"ID":"1f2ec939-8595-4611-a636-a46fffaa8ebf","Type":"ContainerDied","Data":"81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e"} Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.200265 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a17dec6aef6b0552f0eafd7b80b580f1d50832eef7c25f9ef93e23411b5b8e" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.224044 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") pod \"1f2ec939-8595-4611-a636-a46fffaa8ebf\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376512 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") pod \"d36824ca-c5a8-4514-9276-e49126a66018\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") pod \"1f2ec939-8595-4611-a636-a46fffaa8ebf\" (UID: \"1f2ec939-8595-4611-a636-a46fffaa8ebf\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.376791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") pod \"d36824ca-c5a8-4514-9276-e49126a66018\" (UID: \"d36824ca-c5a8-4514-9276-e49126a66018\") " Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.378442 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d36824ca-c5a8-4514-9276-e49126a66018" (UID: "d36824ca-c5a8-4514-9276-e49126a66018"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.378733 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f2ec939-8595-4611-a636-a46fffaa8ebf" (UID: "1f2ec939-8595-4611-a636-a46fffaa8ebf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.383815 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8" (OuterVolumeSpecName: "kube-api-access-cbcl8") pod "d36824ca-c5a8-4514-9276-e49126a66018" (UID: "d36824ca-c5a8-4514-9276-e49126a66018"). InnerVolumeSpecName "kube-api-access-cbcl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.387665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm" (OuterVolumeSpecName: "kube-api-access-nbqbm") pod "1f2ec939-8595-4611-a636-a46fffaa8ebf" (UID: "1f2ec939-8595-4611-a636-a46fffaa8ebf"). InnerVolumeSpecName "kube-api-access-nbqbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbcl8\" (UniqueName: \"kubernetes.io/projected/d36824ca-c5a8-4514-9276-e49126a66018-kube-api-access-cbcl8\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480397 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbqbm\" (UniqueName: \"kubernetes.io/projected/1f2ec939-8595-4611-a636-a46fffaa8ebf-kube-api-access-nbqbm\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480407 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36824ca-c5a8-4514-9276-e49126a66018-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.480415 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f2ec939-8595-4611-a636-a46fffaa8ebf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:57 crc kubenswrapper[4751]: I0130 21:36:57.712200 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2xmqz"] Jan 30 21:36:57 crc kubenswrapper[4751]: W0130 21:36:57.718106 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27ed4323_ecba_4f90_b7ea_a5a0ff7713d6.slice/crio-89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d WatchSource:0}: Error finding container 89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d: Status 404 returned error can't find the container with id 89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.212151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerStarted","Data":"a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215272 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerStarted","Data":"0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerStarted","Data":"89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d"} Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.215476 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e51b-account-create-update-bskb2" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.252972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.761409495 podStartE2EDuration="1m3.252955772s" podCreationTimestamp="2026-01-30 21:35:55 +0000 UTC" firstStartedPulling="2026-01-30 21:36:19.593636791 +0000 UTC m=+1318.339459460" lastFinishedPulling="2026-01-30 21:36:57.085183098 +0000 UTC m=+1355.831005737" observedRunningTime="2026-01-30 21:36:58.247930538 +0000 UTC m=+1356.993753187" watchObservedRunningTime="2026-01-30 21:36:58.252955772 +0000 UTC m=+1356.998778421" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.640935 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g9s48" podUID="fbc382fd-1513-4137-b801-5627cc5886ea" containerName="ovn-controller" probeResult="failure" output=< Jan 30 21:36:58 crc kubenswrapper[4751]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 21:36:58 crc kubenswrapper[4751]: > Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.701841 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.710702 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.732124 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f4rx8" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804665 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.804949 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805085 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") pod \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\" (UID: \"70af95fb-5ca8-4482-a1bc-81b1891e0da7\") " Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805734 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.805882 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.806183 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/70af95fb-5ca8-4482-a1bc-81b1891e0da7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.806201 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.809827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc" (OuterVolumeSpecName: "kube-api-access-mlmhc") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "kube-api-access-mlmhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.814767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.841271 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "swiftconf". 
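
The "Observed pod startup duration" record for prometheus-metric-storage-0 a little above exposes the tracker's arithmetic: podStartE2EDuration is pod creation (21:35:55) to observed-running (21:36:58.252955772), and podStartSLOduration is that span minus the image-pull window, with the pull window taken on the monotonic m=+ readings (1355.831005737 - 1318.339459460 = 37.491546277s). Reproducing the logged numbers in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-01-30T21:35:55Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-30T21:36:58.252955772Z")

        // monotonic m=+... offsets printed in the record
        firstStartedPulling := 1318.339459460
        lastFinishedPulling := 1355.831005737

        e2e := running.Sub(created).Seconds()
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("podStartE2EDuration=%.9fs\n", e2e)      // 63.252955772
        fmt.Printf("podStartSLOduration=%.9fs\n", e2e-pull) // 25.761409495
    }
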
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.842278 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts" (OuterVolumeSpecName: "scripts") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.868972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70af95fb-5ca8-4482-a1bc-81b1891e0da7" (UID: "70af95fb-5ca8-4482-a1bc-81b1891e0da7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907931 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70af95fb-5ca8-4482-a1bc-81b1891e0da7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907960 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907971 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlmhc\" (UniqueName: \"kubernetes.io/projected/70af95fb-5ca8-4482-a1bc-81b1891e0da7-kube-api-access-mlmhc\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907979 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.907989 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/70af95fb-5ca8-4482-a1bc-81b1891e0da7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957111 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957492 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957512 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957542 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957549 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: E0130 21:36:58.957564 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.957570 4751 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959862 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36824ca-c5a8-4514-9276-e49126a66018" containerName="mariadb-database-create" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959887 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="70af95fb-5ca8-4482-a1bc-81b1891e0da7" containerName="swift-ring-rebalance" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.959920 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" containerName="mariadb-account-create-update" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.960641 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.964034 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 21:36:58 crc kubenswrapper[4751]: I0130 21:36:58.972123 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016430 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016709 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.016726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118417 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118651 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118817 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118857 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.118859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.119539 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.120475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.135977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"ovn-controller-g9s48-config-hs6bc\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.228374 4751 generic.go:334] "Generic (PLEG): container finished" podID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerID="0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783" exitCode=0 Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.228477 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerDied","Data":"0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783"} Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.231974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vvq25" event={"ID":"70af95fb-5ca8-4482-a1bc-81b1891e0da7","Type":"ContainerDied","Data":"68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8"} Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.232000 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68907368515e04efc423e96f6ad0f34c1d76a72cb81074b939310269d488cbe8" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.232031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vvq25" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.291157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.790584 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.834831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") pod \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835068 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") pod \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\" (UID: \"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6\") " Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" (UID: "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.835969 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.839981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv" (OuterVolumeSpecName: "kube-api-access-qcsnv") pod "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" (UID: "27ed4323-ecba-4f90-b7ea-a5a0ff7713d6"). InnerVolumeSpecName "kube-api-access-qcsnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.937452 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcsnv\" (UniqueName: \"kubernetes.io/projected/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6-kube-api-access-qcsnv\") on node \"crc\" DevicePath \"\"" Jan 30 21:36:59 crc kubenswrapper[4751]: I0130 21:36:59.946174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:36:59 crc kubenswrapper[4751]: W0130 21:36:59.949451 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9572d2c_ecb1_4249_9cc0_9a3881e6960c.slice/crio-d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48 WatchSource:0}: Error finding container d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48: Status 404 returned error can't find the container with id d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48 Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.262274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerStarted","Data":"d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48"} Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2xmqz" event={"ID":"27ed4323-ecba-4f90-b7ea-a5a0ff7713d6","Type":"ContainerDied","Data":"89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d"} Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277713 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89532bad97a6f7064b4010584f795d1f8d343d93067541a7bf6a67ea4dbcd25d" Jan 30 21:37:00 crc kubenswrapper[4751]: I0130 21:37:00.277723 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2xmqz" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256017 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:01 crc kubenswrapper[4751]: E0130 21:37:01.256437 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256454 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.256637 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" containerName="mariadb-account-create-update" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.257280 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.268175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.284265 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.293576 4751 generic.go:334] "Generic (PLEG): container finished" podID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerID="8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13" exitCode=0 Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.293662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerDied","Data":"8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13"} Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.361843 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.362043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.362075 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.464742 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.472991 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " 
pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.473885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.482913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"mysqld-exporter-0\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " pod="openstack/mysqld-exporter-0" Jan 30 21:37:01 crc kubenswrapper[4751]: I0130 21:37:01.577192 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.080218 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.130199 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:37:02 crc kubenswrapper[4751]: W0130 21:37:02.133501 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea4b0a2_4b62_47b1_b925_f78af9c42125.slice/crio-de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506 WatchSource:0}: Error finding container de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506: Status 404 returned error can't find the container with id de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506 Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.304831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerStarted","Data":"de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506"} Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.715967 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803563 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803714 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.803854 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") pod \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\" (UID: \"d9572d2c-ecb1-4249-9cc0-9a3881e6960c\") " Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.805032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run" (OuterVolumeSpecName: "var-run") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.806874 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.806935 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.807398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.807733 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts" (OuterVolumeSpecName: "scripts") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.834757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl" (OuterVolumeSpecName: "kube-api-access-7jznl") pod "d9572d2c-ecb1-4249-9cc0-9a3881e6960c" (UID: "d9572d2c-ecb1-4249-9cc0-9a3881e6960c"). InnerVolumeSpecName "kube-api-access-7jznl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916594 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jznl\" (UniqueName: \"kubernetes.io/projected/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-kube-api-access-7jznl\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916798 4751 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916858 4751 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916913 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.916976 4751 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:02 crc kubenswrapper[4751]: I0130 21:37:02.917034 4751 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9572d2c-ecb1-4249-9cc0-9a3881e6960c-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.332901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g9s48-config-hs6bc" event={"ID":"d9572d2c-ecb1-4249-9cc0-9a3881e6960c","Type":"ContainerDied","Data":"d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48"} Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.332949 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d243dff1baf5f12ba2f921865d6322fa5c9ae08ac922a4422f0173c54010fe48" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 
21:37:03.333025 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g9s48-config-hs6bc" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.659311 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g9s48" Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.828880 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.839756 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g9s48-config-hs6bc"] Jan 30 21:37:03 crc kubenswrapper[4751]: I0130 21:37:03.989942 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" path="/var/lib/kubelet/pods/d9572d2c-ecb1-4249-9cc0-9a3881e6960c/volumes" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.545155 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.583571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4f6a1442-f7f7-499a-a7d5-c354d76ba9d5-etc-swift\") pod \"swift-storage-0\" (UID: \"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5\") " pod="openstack/swift-storage-0" Jan 30 21:37:08 crc kubenswrapper[4751]: I0130 21:37:08.772151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.032358 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.392213 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.406962 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Jan 30 21:37:10 crc kubenswrapper[4751]: I0130 21:37:10.431632 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:37:11 crc kubenswrapper[4751]: I0130 21:37:11.408184 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 21:37:11 crc kubenswrapper[4751]: W0130 21:37:11.757581 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6a1442_f7f7_499a_a7d5_c354d76ba9d5.slice/crio-360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774 WatchSource:0}: Error finding container 360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774: Status 404 returned error can't find the container with id 
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.080675 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.083061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.286346 4751 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6f39785c-2919-4c29-8405-fd314710c587"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6f39785c_2919_4c29_8405_fd314710c587.slice"
Jan 30 21:37:12 crc kubenswrapper[4751]: E0130 21:37:12.286406 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod6f39785c-2919-4c29-8405-fd314710c587] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6f39785c_2919_4c29_8405_fd314710c587.slice" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq" podUID="6f39785c-2919-4c29-8405-fd314710c587"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.439710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerStarted","Data":"291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24"}
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.442443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"360eed0d27f0ed842a5c448ba4083b3bebbf3ca8bf8301e59f56394be2cdb774"}
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.444545 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerStarted","Data":"20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472"}
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.444578 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-gcttq"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.445969 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.471304 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-r8wrn" podStartSLOduration=3.498873763 podStartE2EDuration="19.471286814s" podCreationTimestamp="2026-01-30 21:36:53 +0000 UTC" firstStartedPulling="2026-01-30 21:36:54.879721492 +0000 UTC m=+1353.625544141" lastFinishedPulling="2026-01-30 21:37:10.852134543 +0000 UTC m=+1369.597957192" observedRunningTime="2026-01-30 21:37:12.457776023 +0000 UTC m=+1371.203598682" watchObservedRunningTime="2026-01-30 21:37:12.471286814 +0000 UTC m=+1371.217109463"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.488443 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.8059720270000001 podStartE2EDuration="11.4883974s" podCreationTimestamp="2026-01-30 21:37:01 +0000 UTC" firstStartedPulling="2026-01-30 21:37:02.135888205 +0000 UTC m=+1360.881710854" lastFinishedPulling="2026-01-30 21:37:11.818313568 +0000 UTC m=+1370.564136227" observedRunningTime="2026-01-30 21:37:12.475921097 +0000 UTC m=+1371.221743746" watchObservedRunningTime="2026-01-30 21:37:12.4883974 +0000 UTC m=+1371.234220049"
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.549132 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"]
Jan 30 21:37:12 crc kubenswrapper[4751]: I0130 21:37:12.560927 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-gcttq"]
Jan 30 21:37:13 crc kubenswrapper[4751]: I0130 21:37:13.469637 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"7a4a522d1102a82b6cc3bc3de1b26131ae54a72e2aa27d700bc820cb187224ad"}
Jan 30 21:37:13 crc kubenswrapper[4751]: I0130 21:37:13.988823 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f39785c-2919-4c29-8405-fd314710c587" path="/var/lib/kubelet/pods/6f39785c-2919-4c29-8405-fd314710c587/volumes"
Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481246 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"8571cc1450ceb2e32cfddbd2edae931c75b4d6f0afbf409ef929fc9df73ad2dc"}
Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"fbb5a6849ee00d84a95512bbd43bb47ae034c963c9a72669a99f85048a502147"}
Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.481314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"894135ba89b7e160a219d64268b79c0eb9ca3709aa94a1511447855dad625fe3"}
Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647501 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647895 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" containerID="cri-o://5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" gracePeriod=600
with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus" containerID="cri-o://5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" gracePeriod=600 Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.648024 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar" containerID="cri-o://a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b" gracePeriod=600 Jan 30 21:37:14 crc kubenswrapper[4751]: I0130 21:37:14.647974 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader" containerID="cri-o://28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f" gracePeriod=600 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517098 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517588 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517596 4751 generic.go:334] "Generic (PLEG): container finished" podID="d56430b1-227c-4074-8d43-86953ab9f911" containerID="5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb" exitCode=0 Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.517640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb"} Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.777695 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.923673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924129 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924171 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924261 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924484 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.924879 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"d56430b1-227c-4074-8d43-86953ab9f911\" (UID: \"d56430b1-227c-4074-8d43-86953ab9f911\") " Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.925001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.925687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926052 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926072 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926084 4751 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d56430b1-227c-4074-8d43-86953ab9f911-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.926991 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.936431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out" (OuterVolumeSpecName: "config-out") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.940161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config" (OuterVolumeSpecName: "config") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.944719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w" (OuterVolumeSpecName: "kube-api-access-b2g5w") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "kube-api-access-b2g5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.950601 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.972114 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config" (OuterVolumeSpecName: "web-config") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:15 crc kubenswrapper[4751]: I0130 21:37:15.988389 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d56430b1-227c-4074-8d43-86953ab9f911" (UID: "d56430b1-227c-4074-8d43-86953ab9f911"). InnerVolumeSpecName "pvc-7297f1d7-6116-4005-9637-09e45a6844de". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027730 4751 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d56430b1-227c-4074-8d43-86953ab9f911-config-out\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027785 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") on node \"crc\" " Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027798 4751 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027808 4751 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-web-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027819 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027828 4751 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d56430b1-227c-4074-8d43-86953ab9f911-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.027839 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2g5w\" (UniqueName: \"kubernetes.io/projected/d56430b1-227c-4074-8d43-86953ab9f911-kube-api-access-b2g5w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.068464 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.068610 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7297f1d7-6116-4005-9637-09e45a6844de" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de") on node "crc"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.139844 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"89e2e33fdba45eeeea45bc8f0586122609ab7724ff300adcdd8032ef2cbd45f3"}
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"241b4e9660fbb61180d91be5352ccc244eeb422fac2e80019c074b1eb101d492"}
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528570 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"9cf88493556e27c29ac9ce94fd84a890a829378eeb33c3920da155c748578804"}
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.528580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"14a11d7b12287a9f9a1dae58494a3df13105c63a9fbc0ce0e54f7e7cc214e4bc"}
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530536 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d56430b1-227c-4074-8d43-86953ab9f911","Type":"ContainerDied","Data":"14a262a32c578ab480de0003e92d828da04b4354e1d5c9b7efbfca95d406a828"}
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530574 4751 scope.go:117] "RemoveContainer" containerID="a9ca9e07790f5d346c1a4232c0516dbab0611a34ac86ef5489631c5577ce240b"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.530611 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.546309 4751 scope.go:117] "RemoveContainer" containerID="28f17c906b227a5af5cc4ace126e147801603741df157b46ecf7a45821fe1d9f"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.559719 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.571009 4751 scope.go:117] "RemoveContainer" containerID="5e628f84f2b1e0bad7fea4bb8e7b42341154ce8e229a8f36477102accdee0cfb"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.571563 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.592245 4751 scope.go:117] "RemoveContainer" containerID="0e8380c6ff95a924287e8674599018ad6d281082245c17624c192e7eea73966f"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.593614 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594044 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594062 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config"
Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594087 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594093 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar"
Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594108 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594115 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader"
Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594129 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="init-config-reloader"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594134 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="init-config-reloader"
Jan 30 21:37:16 crc kubenswrapper[4751]: E0130 21:37:16.594145 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594151 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594314 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9572d2c-ecb1-4249-9cc0-9a3881e6960c" containerName="ovn-config"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594345 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="thanos-sidecar"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594365 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="prometheus"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.594374 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56430b1-227c-4074-8d43-86953ab9f911" containerName="config-reloader"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.601498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.603783 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.606062 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hjfsj"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608737 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.608920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609221 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609368 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.609586 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.616147 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751699 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751725 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0"
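Admitting the replacement prometheus-metric-storage-0 (new UID 3e7af95c-7ba2-4e0b-9947-795d9629744c) triggers RemoveStaleState in the CPU and memory managers: per-container resource assignments keyed by the deleted pods' UIDs (d56430b1-..., d9572d2c-...) are purged before the new pod's containers are accounted. A toy sketch of that bookkeeping (the real managers track CPUSets and memory blocks, not strings; this is illustrative only):

package main

import "fmt"

type key struct{ podUID, container string }

// assignments stands in for per-container CPUSet / memory-block state.
type assignments map[key]string

func removeStaleState(a assignments, activePods map[string]bool) {
	for k := range a {
		if !activePods[k.podUID] {
			// Mirrors the paired entries above: "RemoveStaleState: removing
			// container" followed by "Deleted CPUSet assignment".
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(a, k)
		}
	}
}

func main() {
	a := assignments{
		{"d56430b1-227c-4074-8d43-86953ab9f911", "prometheus"}: "0-3", // old, deleted pod
		{"3e7af95c-7ba2-4e0b-9947-795d9629744c", "prometheus"}: "0-3", // replacement pod
	}
	removeStaleState(a, map[string]bool{"3e7af95c-7ba2-4e0b-9947-795d9629744c": true})
	fmt.Println(len(a), "assignment(s) remain")
}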
\"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.751997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.752018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc 
kubenswrapper[4751]: I0130 21:37:16.855542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855578 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855666 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.855706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.857237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.857242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.866000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e7af95c-7ba2-4e0b-9947-795d9629744c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.877452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.877565 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.882041 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.882970 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.884554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.884893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.885183 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e7af95c-7ba2-4e0b-9947-795d9629744c-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc 
kubenswrapper[4751]: I0130 21:37:16.887310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwpth\" (UniqueName: \"kubernetes.io/projected/3e7af95c-7ba2-4e0b-9947-795d9629744c-kube-api-access-hwpth\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e7af95c-7ba2-4e0b-9947-795d9629744c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898912 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:37:16 crc kubenswrapper[4751]: I0130 21:37:16.898951 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a95dfeb2de129561acc13a0d8e1495cdeeea1e8a0c06c82206df350d4e35d0bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.004515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7297f1d7-6116-4005-9637-09e45a6844de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7297f1d7-6116-4005-9637-09e45a6844de\") pod \"prometheus-metric-storage-0\" (UID: \"3e7af95c-7ba2-4e0b-9947-795d9629744c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.036032 4751 util.go:30] "No sandbox for pod can be found. 
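Each volume above walks the same pipeline: VerifyControllerAttachedVolume, then (for the CSI PVC only) a device-level MountDevice that skips the staging RPC but still reports success with the globalmount path, then the per-pod MountVolume.SetUp. Reduced to labelled stages, under the assumption that only device-mountable volumes get the MountDevice step (a sketch of the ordering visible in the log, not the reconciler implementation):

package main

import "fmt"

// mountVolume prints the stages a volume passes through on this node.
func mountVolume(name string, deviceMountable, hasStageCapability bool) {
	fmt.Printf("VerifyControllerAttachedVolume started for %q\n", name)
	if deviceMountable {
		if !hasStageCapability {
			// Matches csi_attacher.go:380 above: staging is skipped, yet
			// MountDevice still reports success with the globalmount path.
			fmt.Printf("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice for %q...\n", name)
		}
		fmt.Printf("MountVolume.MountDevice succeeded for %q (global mount)\n", name)
	}
	fmt.Printf("MountVolume.SetUp succeeded for %q (mounted into the pod)\n", name)
}

func main() {
	mountVolume("tls-assets", false, false)                                    // projected volume: no device stage
	mountVolume("pvc-7297f1d7-6116-4005-9637-09e45a6844de", true, false)       // hostpath CSI PVC: device stage, no staging RPC
}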
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:17 crc kubenswrapper[4751]: I0130 21:37:17.517532 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 21:37:17 crc kubenswrapper[4751]: W0130 21:37:17.834957 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e7af95c_7ba2_4e0b_9947_795d9629744c.slice/crio-38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5 WatchSource:0}: Error finding container 38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5: Status 404 returned error can't find the container with id 38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5 Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.008421 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56430b1-227c-4074-8d43-86953ab9f911" path="/var/lib/kubelet/pods/d56430b1-227c-4074-8d43-86953ab9f911/volumes" Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.588673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"029915db9a08c3651c62c1b28ce1b3b157611e1da6b3b0a09e4689e28b329d21"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.589231 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"32f4cf26b4d5634a7aa234f7edaf69a22cde94fecef4036f3b4ef628ccd910d9"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.589776 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"f9fb362e2221b63f1fac87f00313c82aa062d1442c5161ccbcbb2abb1d5938ae"} Jan 30 21:37:18 crc kubenswrapper[4751]: I0130 21:37:18.598990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"38a83d0085596e4fd29a7a96b241b5a7196453da6e5a891acdb5047466f9f5a5"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"b6b9c86b1cb21a79478fbfef2c52220c36f6bd72bfcd705ce1c7e42447a24e9f"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"cf131aae7d90e0d39d7e40de9dc178eddb608f800985f0805442fdcfceac0037"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"1fc3868720df81441043b8cea6512bca4b7cea3c696f56e7a1d920098fc6f8f7"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.617904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4f6a1442-f7f7-499a-a7d5-c354d76ba9d5","Type":"ContainerStarted","Data":"1bc1ee641cb136a2de0216fd614525dbcc30607450c491622462f9153d0700ef"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.619460 4751 generic.go:334] "Generic (PLEG): 
container finished" podID="32a93444-0221-40b7-9869-428788112ae2" containerID="291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24" exitCode=0 Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.619505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerDied","Data":"291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24"} Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.680002 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.555556389 podStartE2EDuration="44.679985321s" podCreationTimestamp="2026-01-30 21:36:35 +0000 UTC" firstStartedPulling="2026-01-30 21:37:11.765601631 +0000 UTC m=+1370.511424290" lastFinishedPulling="2026-01-30 21:37:17.890030553 +0000 UTC m=+1376.635853222" observedRunningTime="2026-01-30 21:37:19.676474117 +0000 UTC m=+1378.422296776" watchObservedRunningTime="2026-01-30 21:37:19.679985321 +0000 UTC m=+1378.425807970" Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.995411 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.997420 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:19 crc kubenswrapper[4751]: I0130 21:37:19.999759 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.021061 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.031492 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039068 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.039396 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152474 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152677 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.152801 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.153576 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 
21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.154113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.155840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.156505 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.170718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.381451 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"dnsmasq-dns-5c79d794d7-cplrw\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.391544 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.405582 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 30 21:37:20 crc kubenswrapper[4751]: I0130 21:37:20.614788 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.241736 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.498823 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.643204 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.645822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerStarted","Data":"8aa6a293385188bd134bcd72ff7081e89e7403adf6b36ab32f6d6b5dfd8657b9"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-r8wrn" event={"ID":"32a93444-0221-40b7-9869-428788112ae2","Type":"ContainerDied","Data":"7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a"} Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647562 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a175ccea97ecde04362b48b6632c136c6a64e0161ec0c91d00f4d408467a89a" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.647585 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-r8wrn" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.682471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.682814 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.683515 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.683707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") pod \"32a93444-0221-40b7-9869-428788112ae2\" (UID: \"32a93444-0221-40b7-9869-428788112ae2\") " Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.686674 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2" (OuterVolumeSpecName: "kube-api-access-gcdb2") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "kube-api-access-gcdb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.696300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.726649 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.743887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data" (OuterVolumeSpecName: "config-data") pod "32a93444-0221-40b7-9869-428788112ae2" (UID: "32a93444-0221-40b7-9869-428788112ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789882 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789912 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcdb2\" (UniqueName: \"kubernetes.io/projected/32a93444-0221-40b7-9869-428788112ae2-kube-api-access-gcdb2\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789928 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:21 crc kubenswrapper[4751]: I0130 21:37:21.789937 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32a93444-0221-40b7-9869-428788112ae2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.242753 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.280773 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:22 crc kubenswrapper[4751]: E0130 21:37:22.281304 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.281344 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.281626 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a93444-0221-40b7-9869-428788112ae2" containerName="glance-db-sync" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.282994 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.304551 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.401932 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.402024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.402048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503934 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.503961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.504995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.505463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.520974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqmp\" (UniqueName: 
\"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"dnsmasq-dns-5f59b8f679-jnhv7\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.599864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.881290 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.882961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:22 crc kubenswrapper[4751]: I0130 21:37:22.933416 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.024129 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.024207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.053775 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.055042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.092500 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.094281 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.098902 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.125594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.125687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.126860 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.127469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.155146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"barbican-db-create-2gxmh\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.168874 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227176 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.227314 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.259611 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.322313 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.325092 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.345601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.348834 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.364589 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.365407 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"cinder-db-create-zhgsw\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.375197 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.415577 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.417584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.419877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"barbican-31bb-account-create-update-w6h5f\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420063 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420247 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.420388 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.443028 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.447860 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.447995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.448334 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.495229 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.497237 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.504068 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.515994 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.517634 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.537668 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.550868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.551447 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.554288 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.555641 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.559752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.567782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"heat-db-create-hr9lv\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.638687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.653495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod 
\"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.659944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.664487 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.668056 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.675634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.685975 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.696417 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.698904 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.699101 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerStarted","Data":"58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53"} Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.704439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"keystone-db-sync-z99cv\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.707674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerStarted","Data":"6864e85d2504e6732a265d6ea2bacb5cab1c5dcba817c3a3b4ad3a6ad9332eef"} Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.729064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.749153 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755105 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: 
I0130 21:37:23.755175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.755821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.756115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.773051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.782189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"cinder-f7f7-account-create-update-d88cz\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.783490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod 
\"heat-2618-account-create-update-fdl95\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.781737 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.799778 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.819917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.858707 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.859426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.875432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"neutron-db-create-lqv47\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.879858 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.961237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.961561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:23 crc kubenswrapper[4751]: I0130 21:37:23.962683 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.012744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"neutron-0f07-account-create-update-fr6kw\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.054783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.079415 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.079882 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.207744 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.222766 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.225664 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b9f9eed_02b1_4541_8ebb_34826639233b.slice/crio-d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a WatchSource:0}: Error finding container d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a: Status 404 returned error can't find the container with id d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.229136 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00437219_cb6b_48ad_a0cb_d75b82412ba1.slice/crio-30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2 WatchSource:0}: Error finding container 30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2: Status 404 returned error can't find the container with id 30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.491370 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.630484 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.647196 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.658688 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:37:24 crc kubenswrapper[4751]: W0130 21:37:24.673545 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bc5d80d_ae17_431d_8e0f_6003af0fa6b1.slice/crio-217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c WatchSource:0}: Error finding container 217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c: Status 404 returned error can't find the container with id 217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.747518 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerStarted","Data":"081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.747879 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerStarted","Data":"f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.749940 4751 generic.go:334] "Generic (PLEG): container finished" podID="37ac1bbe-c547-456d-8b0a-0c29a877775c" 
containerID="3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89" exitCode=0 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.749988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.767567 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-2gxmh" podStartSLOduration=2.767548342 podStartE2EDuration="2.767548342s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.767198763 +0000 UTC m=+1383.513021412" watchObservedRunningTime="2026-01-30 21:37:24.767548342 +0000 UTC m=+1383.513370991" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.776625 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerStarted","Data":"51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.776670 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerStarted","Data":"d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.810123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerStarted","Data":"e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.818018 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerStarted","Data":"30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.830306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerStarted","Data":"eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.840909 4751 generic.go:334] "Generic (PLEG): container finished" podID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerID="58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53" exitCode=0 Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.841024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerDied","Data":"58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.842840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerStarted","Data":"217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.845183 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-31bb-account-create-update-w6h5f" podStartSLOduration=1.845163302 podStartE2EDuration="1.845163302s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.808624937 +0000 UTC m=+1383.554447586" watchObservedRunningTime="2026-01-30 21:37:24.845163302 +0000 UTC m=+1383.590985941" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.845521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerStarted","Data":"15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0"} Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.897958 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-zhgsw" podStartSLOduration=2.8979377509999997 podStartE2EDuration="2.897937751s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:24.832095224 +0000 UTC m=+1383.577917873" watchObservedRunningTime="2026-01-30 21:37:24.897937751 +0000 UTC m=+1383.643760400" Jan 30 21:37:24 crc kubenswrapper[4751]: I0130 21:37:24.898452 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.081174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:37:25 crc kubenswrapper[4751]: W0130 21:37:25.163920 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f3bf7e5_bd0d_46f0_b5bf_86fe9e6c428c.slice/crio-a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3 WatchSource:0}: Error finding container a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3: Status 404 returned error can't find the container with id a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.447959 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563721 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563851 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.563918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.564029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") pod \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\" (UID: \"a7876a87-ce9e-4d67-a296-cfe228be3d3e\") " Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.576431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh" (OuterVolumeSpecName: "kube-api-access-lrkjh") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "kube-api-access-lrkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.636928 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.652503 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.669925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671473 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671723 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrkjh\" (UniqueName: \"kubernetes.io/projected/a7876a87-ce9e-4d67-a296-cfe228be3d3e-kube-api-access-lrkjh\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671828 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671899 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.671728 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config" (OuterVolumeSpecName: "config") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.713796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7876a87-ce9e-4d67-a296-cfe228be3d3e" (UID: "a7876a87-ce9e-4d67-a296-cfe228be3d3e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.774226 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.774272 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7876a87-ce9e-4d67-a296-cfe228be3d3e-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.864727 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerStarted","Data":"5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.865975 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.872427 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerID="51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.872586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerDied","Data":"51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.880035 4751 generic.go:334] "Generic (PLEG): container finished" podID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerID="c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.880105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerDied","Data":"c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.886868 4751 generic.go:334] "Generic (PLEG): container finished" podID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerID="9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.886938 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerDied","Data":"9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.894033 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerStarted","Data":"757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.894084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerStarted","Data":"e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.896741 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerID="dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.896799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerDied","Data":"dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.897192 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podStartSLOduration=3.897178048 podStartE2EDuration="3.897178048s" podCreationTimestamp="2026-01-30 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:25.888004663 +0000 UTC m=+1384.633827312" watchObservedRunningTime="2026-01-30 21:37:25.897178048 +0000 UTC m=+1384.643000697" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.898894 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerStarted","Data":"5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.898927 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerStarted","Data":"a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.900497 4751 generic.go:334] "Generic (PLEG): container finished" podID="bf1f702d-7084-4e85-add9-15c10223d801" containerID="081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.900535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerDied","Data":"081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" event={"ID":"a7876a87-ce9e-4d67-a296-cfe228be3d3e","Type":"ContainerDied","Data":"8aa6a293385188bd134bcd72ff7081e89e7403adf6b36ab32f6d6b5dfd8657b9"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901505 4751 scope.go:117] "RemoveContainer" containerID="58af424fb61237df15655de4cccc59760be3dabac1d92e813f637b367e667a53" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.901608 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cplrw" Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.934030 4751 generic.go:334] "Generic (PLEG): container finished" podID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerID="86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060" exitCode=0 Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.934079 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerDied","Data":"86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060"} Jan 30 21:37:25 crc kubenswrapper[4751]: I0130 21:37:25.992983 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0f07-account-create-update-fr6kw" podStartSLOduration=2.992967634 podStartE2EDuration="2.992967634s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:25.978674983 +0000 UTC m=+1384.724497642" watchObservedRunningTime="2026-01-30 21:37:25.992967634 +0000 UTC m=+1384.738790283" Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.059226 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.067355 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cplrw"] Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.950721 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerID="5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4" exitCode=0 Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.950804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerDied","Data":"5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4"} Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.961279 4751 generic.go:334] "Generic (PLEG): container finished" podID="0297c6e3-62f8-49cc-a073-8bb104949456" containerID="757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4" exitCode=0 Jan 30 21:37:26 crc kubenswrapper[4751]: I0130 21:37:26.961412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerDied","Data":"757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4"} Jan 30 21:37:27 crc kubenswrapper[4751]: I0130 21:37:27.996289 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" path="/var/lib/kubelet/pods/a7876a87-ce9e-4d67-a296-cfe228be3d3e/volumes" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.475391 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.482059 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.498693 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.539709 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.549093 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.554645 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.573045 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.579809 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596888 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") pod \"e63e6079-6772-46c3-9ec3-1e01741a210f\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") pod \"0297c6e3-62f8-49cc-a073-8bb104949456\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.596982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") pod \"0297c6e3-62f8-49cc-a073-8bb104949456\" (UID: \"0297c6e3-62f8-49cc-a073-8bb104949456\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.597006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") pod \"e63e6079-6772-46c3-9ec3-1e01741a210f\" (UID: \"e63e6079-6772-46c3-9ec3-1e01741a210f\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.599201 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e63e6079-6772-46c3-9ec3-1e01741a210f" (UID: "e63e6079-6772-46c3-9ec3-1e01741a210f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.599352 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0297c6e3-62f8-49cc-a073-8bb104949456" (UID: "0297c6e3-62f8-49cc-a073-8bb104949456"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.607670 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w" (OuterVolumeSpecName: "kube-api-access-c9z5w") pod "e63e6079-6772-46c3-9ec3-1e01741a210f" (UID: "e63e6079-6772-46c3-9ec3-1e01741a210f"). InnerVolumeSpecName "kube-api-access-c9z5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.617848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72" (OuterVolumeSpecName: "kube-api-access-k6z72") pod "0297c6e3-62f8-49cc-a073-8bb104949456" (UID: "0297c6e3-62f8-49cc-a073-8bb104949456"). InnerVolumeSpecName "kube-api-access-k6z72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.698942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") pod \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") pod \"00437219-cb6b-48ad-a0cb-d75b82412ba1\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699205 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") pod \"3b9f9eed-02b1-4541-8ebb-34826639233b\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") pod \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\" (UID: \"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") pod \"00437219-cb6b-48ad-a0cb-d75b82412ba1\" (UID: \"00437219-cb6b-48ad-a0cb-d75b82412ba1\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") pod \"056813ab-3913-42db-afa1-a79cb8e3a3c9\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") pod \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 
21:37:30.699725 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") pod \"3b9f9eed-02b1-4541-8ebb-34826639233b\" (UID: \"3b9f9eed-02b1-4541-8ebb-34826639233b\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") pod \"056813ab-3913-42db-afa1-a79cb8e3a3c9\" (UID: \"056813ab-3913-42db-afa1-a79cb8e3a3c9\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") pod \"bf1f702d-7084-4e85-add9-15c10223d801\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.699968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") pod \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\" (UID: \"a9112f9c-911e-47d4-be64-e6f90fa6fa35\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700049 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") pod \"bf1f702d-7084-4e85-add9-15c10223d801\" (UID: \"bf1f702d-7084-4e85-add9-15c10223d801\") " Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700511 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9z5w\" (UniqueName: \"kubernetes.io/projected/e63e6079-6772-46c3-9ec3-1e01741a210f-kube-api-access-c9z5w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700588 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6z72\" (UniqueName: \"kubernetes.io/projected/0297c6e3-62f8-49cc-a073-8bb104949456-kube-api-access-k6z72\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700644 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0297c6e3-62f8-49cc-a073-8bb104949456-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.700695 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63e6079-6772-46c3-9ec3-1e01741a210f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701430 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b9f9eed-02b1-4541-8ebb-34826639233b" (UID: "3b9f9eed-02b1-4541-8ebb-34826639233b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" (UID: "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.701930 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00437219-cb6b-48ad-a0cb-d75b82412ba1" (UID: "00437219-cb6b-48ad-a0cb-d75b82412ba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.702257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9112f9c-911e-47d4-be64-e6f90fa6fa35" (UID: "a9112f9c-911e-47d4-be64-e6f90fa6fa35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.703026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "056813ab-3913-42db-afa1-a79cb8e3a3c9" (UID: "056813ab-3913-42db-afa1-a79cb8e3a3c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.703130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf1f702d-7084-4e85-add9-15c10223d801" (UID: "bf1f702d-7084-4e85-add9-15c10223d801"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.704223 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48" (OuterVolumeSpecName: "kube-api-access-dxc48") pod "00437219-cb6b-48ad-a0cb-d75b82412ba1" (UID: "00437219-cb6b-48ad-a0cb-d75b82412ba1"). InnerVolumeSpecName "kube-api-access-dxc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.705747 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955" (OuterVolumeSpecName: "kube-api-access-78955") pod "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" (UID: "5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c"). InnerVolumeSpecName "kube-api-access-78955". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.706293 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt" (OuterVolumeSpecName: "kube-api-access-sdhmt") pod "a9112f9c-911e-47d4-be64-e6f90fa6fa35" (UID: "a9112f9c-911e-47d4-be64-e6f90fa6fa35"). InnerVolumeSpecName "kube-api-access-sdhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.706995 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh" (OuterVolumeSpecName: "kube-api-access-hdlzh") pod "bf1f702d-7084-4e85-add9-15c10223d801" (UID: "bf1f702d-7084-4e85-add9-15c10223d801"). InnerVolumeSpecName "kube-api-access-hdlzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.709014 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w" (OuterVolumeSpecName: "kube-api-access-gb22w") pod "056813ab-3913-42db-afa1-a79cb8e3a3c9" (UID: "056813ab-3913-42db-afa1-a79cb8e3a3c9"). InnerVolumeSpecName "kube-api-access-gb22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.717978 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct" (OuterVolumeSpecName: "kube-api-access-t78ct") pod "3b9f9eed-02b1-4541-8ebb-34826639233b" (UID: "3b9f9eed-02b1-4541-8ebb-34826639233b"). InnerVolumeSpecName "kube-api-access-t78ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.804354 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78955\" (UniqueName: \"kubernetes.io/projected/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-kube-api-access-78955\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805202 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxc48\" (UniqueName: \"kubernetes.io/projected/00437219-cb6b-48ad-a0cb-d75b82412ba1-kube-api-access-dxc48\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805238 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9f9eed-02b1-4541-8ebb-34826639233b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805258 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805276 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00437219-cb6b-48ad-a0cb-d75b82412ba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805292 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056813ab-3913-42db-afa1-a79cb8e3a3c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805309 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9112f9c-911e-47d4-be64-e6f90fa6fa35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805350 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t78ct\" (UniqueName: \"kubernetes.io/projected/3b9f9eed-02b1-4541-8ebb-34826639233b-kube-api-access-t78ct\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805367 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb22w\" (UniqueName: \"kubernetes.io/projected/056813ab-3913-42db-afa1-a79cb8e3a3c9-kube-api-access-gb22w\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805384 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1f702d-7084-4e85-add9-15c10223d801-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805421 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdhmt\" (UniqueName: \"kubernetes.io/projected/a9112f9c-911e-47d4-be64-e6f90fa6fa35-kube-api-access-sdhmt\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:30 crc kubenswrapper[4751]: I0130 21:37:30.805439 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdlzh\" (UniqueName: \"kubernetes.io/projected/bf1f702d-7084-4e85-add9-15c10223d801-kube-api-access-hdlzh\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008074 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lqv47" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lqv47" event={"ID":"0297c6e3-62f8-49cc-a073-8bb104949456","Type":"ContainerDied","Data":"e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.008169 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f3f61970d72f82ffba4d9a80464a10e8b9c51e7583102951e1de7d389e2988" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2gxmh" event={"ID":"bf1f702d-7084-4e85-add9-15c10223d801","Type":"ContainerDied","Data":"f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012220 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37db6345203c46035cf2c18f2b4711cfad519df170163afdd9effa521c52d7f" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.012287 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2gxmh" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015183 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7f7-account-create-update-d88cz" event={"ID":"a9112f9c-911e-47d4-be64-e6f90fa6fa35","Type":"ContainerDied","Data":"eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015232 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7be4270b42ffce4b76f159858fa9a7aa3755769d6109ee488fcb1be59f44c4" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.015316 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7f7-account-create-update-d88cz" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.017904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerStarted","Data":"dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f07-account-create-update-fr6kw" event={"ID":"5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c","Type":"ContainerDied","Data":"a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023233 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4945e2a89acdb4e8102ab0d694b98c9850bbfc9f1b5e7a4eafc6349cfa65fb3" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.023208 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0f07-account-create-update-fr6kw" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2618-account-create-update-fdl95" event={"ID":"e63e6079-6772-46c3-9ec3-1e01741a210f","Type":"ContainerDied","Data":"15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040580 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f677f718a80e7a19e65a1e37fb95a181277e9a1d14c765d1593019e2e9f3c0" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.040682 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2618-account-create-update-fdl95" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgsw" event={"ID":"00437219-cb6b-48ad-a0cb-d75b82412ba1","Type":"ContainerDied","Data":"30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043523 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30cf4d0dacd7fd4fd22cf05d9b571d841adae3d71171a81d492efaee7e75a8c2" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.043671 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgsw" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.049642 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e7af95c-7ba2-4e0b-9947-795d9629744c" containerID="177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b" exitCode=0 Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.049765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerDied","Data":"177d01f9395c57a0704f8e3be47f47ddcda9844296cb5595f9c79bfbfade602b"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.051279 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z99cv" podStartSLOduration=2.508373302 podStartE2EDuration="8.051260384s" podCreationTimestamp="2026-01-30 21:37:23 +0000 UTC" firstStartedPulling="2026-01-30 21:37:24.724563784 +0000 UTC m=+1383.470386433" lastFinishedPulling="2026-01-30 21:37:30.267450866 +0000 UTC m=+1389.013273515" observedRunningTime="2026-01-30 21:37:31.046478626 +0000 UTC m=+1389.792301275" watchObservedRunningTime="2026-01-30 21:37:31.051260384 +0000 UTC m=+1389.797083033" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.053090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-31bb-account-create-update-w6h5f" event={"ID":"3b9f9eed-02b1-4541-8ebb-34826639233b","Type":"ContainerDied","Data":"d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.054035 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f85c50ce506d7335344aa42ebdeec0dc51661e674c8577501465433e6f645a" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.053157 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-31bb-account-create-update-w6h5f" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hr9lv" event={"ID":"056813ab-3913-42db-afa1-a79cb8e3a3c9","Type":"ContainerDied","Data":"e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97"} Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055868 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1328141a3657eae04671c0e1b8d5daf9d2fd7acd381b279a7654fac0a691f97" Jan 30 21:37:31 crc kubenswrapper[4751]: I0130 21:37:31.055962 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hr9lv" Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.067443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"b7d4458f747dea98872f2e05c84ba42c153f59bc233d9d19cd52f10f61db8075"} Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.602595 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.717969 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:32 crc kubenswrapper[4751]: I0130 21:37:32.718253 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" containerID="cri-o://d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" gracePeriod=10 Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.077907 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerID="d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" exitCode=0 Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.078233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e"} Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.265531 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367143 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367293 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367379 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.367509 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") pod \"eb683b6d-9110-46e1-8406-eea86d9cc73b\" (UID: \"eb683b6d-9110-46e1-8406-eea86d9cc73b\") " Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.380418 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9" (OuterVolumeSpecName: "kube-api-access-58ct9") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "kube-api-access-58ct9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.421362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config" (OuterVolumeSpecName: "config") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.431319 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.435835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.443251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb683b6d-9110-46e1-8406-eea86d9cc73b" (UID: "eb683b6d-9110-46e1-8406-eea86d9cc73b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469533 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469566 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58ct9\" (UniqueName: \"kubernetes.io/projected/eb683b6d-9110-46e1-8406-eea86d9cc73b-kube-api-access-58ct9\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469578 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469588 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:33 crc kubenswrapper[4751]: I0130 21:37:33.469596 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb683b6d-9110-46e1-8406-eea86d9cc73b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" event={"ID":"eb683b6d-9110-46e1-8406-eea86d9cc73b","Type":"ContainerDied","Data":"91fac1793a7a2b8a269edafca995d78c1aceb7914291bdd22c295ca0ed226b45"} Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114236 4751 scope.go:117] "RemoveContainer" containerID="d13bdba61d4e84c62b4410d765f4f99e77b7c81d9c8b2fd1ad7ff51b9c7b511e" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.114298 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4dbml" Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.145974 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.157606 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4dbml"] Jan 30 21:37:34 crc kubenswrapper[4751]: I0130 21:37:34.159351 4751 scope.go:117] "RemoveContainer" containerID="ed1388c6eb28c157030933478df87642f4fba3d9c198c284f1958d42816f2e6a" Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.126678 4751 generic.go:334] "Generic (PLEG): container finished" podID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerID="dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff" exitCode=0 Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.126771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerDied","Data":"dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.131788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"60cd133862e242523da1850cf2668498e42b1be6ab5b9c3500e00cd6db2d11b9"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.131810 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e7af95c-7ba2-4e0b-9947-795d9629744c","Type":"ContainerStarted","Data":"bcfa0cad1d668d9d2687e92590ddc5f1eeb541c940e42b19186ed9cd5552b432"} Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.196796 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.196769484 podStartE2EDuration="19.196769484s" podCreationTimestamp="2026-01-30 21:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:35.185511523 +0000 UTC m=+1393.931334172" watchObservedRunningTime="2026-01-30 21:37:35.196769484 +0000 UTC m=+1393.942592173" Jan 30 21:37:35 crc kubenswrapper[4751]: I0130 21:37:35.999124 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" path="/var/lib/kubelet/pods/eb683b6d-9110-46e1-8406-eea86d9cc73b/volumes" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.590734 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735531 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.735785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") pod \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\" (UID: \"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1\") " Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.744667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69" (OuterVolumeSpecName: "kube-api-access-2rr69") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "kube-api-access-2rr69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.770827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.802229 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data" (OuterVolumeSpecName: "config-data") pod "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" (UID: "0bc5d80d-ae17-431d-8e0f-6003af0fa6b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838239 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838471 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rr69\" (UniqueName: \"kubernetes.io/projected/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-kube-api-access-2rr69\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:36 crc kubenswrapper[4751]: I0130 21:37:36.838548 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.036826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155457 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z99cv" event={"ID":"0bc5d80d-ae17-431d-8e0f-6003af0fa6b1","Type":"ContainerDied","Data":"217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c"} Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155513 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217787355da3f9922a4df5837ffe3d8b89b3671ee38fbdd59e2d1b3a45833a3c" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.155565 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z99cv" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474115 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474763 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474781 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474803 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474811 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474817 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474838 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474843 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 
21:37:37.474858 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474864 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474873 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474879 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474892 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474898 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474907 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474913 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474926 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474931 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474938 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474944 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474950 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474956 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: E0130 21:37:37.474971 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.474976 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475144 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475157 4751 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a7876a87-ce9e-4d67-a296-cfe228be3d3e" containerName="init" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475167 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb683b6d-9110-46e1-8406-eea86d9cc73b" containerName="dnsmasq-dns" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475177 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475186 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475196 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1f702d-7084-4e85-add9-15c10223d801" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475208 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475219 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475228 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" containerName="keystone-db-sync" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475240 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" containerName="mariadb-account-create-update" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475249 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" containerName="mariadb-database-create" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.475926 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480232 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480650 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.480868 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.481054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.481293 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.487240 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.501973 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.511774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.517154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553076 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553751 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.553844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.641917 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.645493 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.650236 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q572p" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.650532 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.656902 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657064 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657152 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.657997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658168 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658290 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: 
I0130 21:37:37.658463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.658787 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.665932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.669239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.673538 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.675440 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.695684 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.697986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc 
kubenswrapper[4751]: I0130 21:37:37.705972 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"keystone-bootstrap-tpqxs\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") " pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766345 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766431 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766459 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766476 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.766604 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.767642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.768289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.769026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.769723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.770357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.793218 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.795042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805055 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805334 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6hfgl" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.805485 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.812808 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.836456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"dnsmasq-dns-bbf5cc879-64gs8\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.837975 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.873632 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.879635 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.896132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.897157 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.899834 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.901216 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.908735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.935573 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.935792 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-s756f" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.959844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"heat-db-sync-npwgd\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.974973 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975066 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975221 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.975258 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.985006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:37 crc kubenswrapper[4751]: I0130 21:37:37.993401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.079777 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"neutron-db-sync-lwm4t\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod 
\"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.081288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.119563 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.119609 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.122918 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.126949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.085096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.128021 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.142852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.152670 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8smxc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.153810 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.155569 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.163858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.210049 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.214519 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.221166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"cinder-db-sync-bq6lp\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233046 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233176 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.233454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.297260 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.297708 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.310315 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.311555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.320983 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrpx" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.338014 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.339025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.339109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341340 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.341408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.342734 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.351796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.353770 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.354030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.364658 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.372045 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.381058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"placement-db-sync-v9spg\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.389102 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.390846 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.423141 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.436070 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.443991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.444165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.444199 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.445025 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.445389 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.449865 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.465314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.468751 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.553722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554081 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554169 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554194 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 
crc kubenswrapper[4751]: I0130 21:37:38.554216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.554418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.563974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.569481 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.584234 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"barbican-db-sync-rt7v2\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.636665 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.647526 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670245 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670356 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670511 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670573 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.670760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.673585 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.674216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.676849 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.684415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689633 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.689651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.690175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.696439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.698753 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.703201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.703442 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.704944 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.715866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.725602 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.727450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.728185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"dnsmasq-dns-56df8fb6b7-5x59j\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.737233 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.768633 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.770393 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.792255 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.792527 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.795716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"ceilometer-0\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") " pod="openstack/ceilometer-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.796001 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878808 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878885 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 
30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.878901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.880733 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"] Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982370 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982409 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982512 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982631 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.982832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.983508 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.987756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.988362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995197 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995234 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.995922 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:37:38 crc kubenswrapper[4751]: I0130 21:37:38.996027 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.001991 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.002767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.011255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028691 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.028773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.029102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.033454 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.036215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.037036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.043307 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.076073 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.098893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") " pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.102610 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-npwgd"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.111019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") " pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.145008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.211297 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.260107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerStarted","Data":"61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.266765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerStarted","Data":"f03447e044ec076808966e68051e046395f76c21c7d74f8ecac4fe5c4986784d"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.275550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerStarted","Data":"735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8"} Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.477454 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:37:39 crc kubenswrapper[4751]: W0130 21:37:39.506845 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42b4031_ca3e_4b28_b62a_eb346132dc3a.slice/crio-8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707 WatchSource:0}: Error finding container 8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707: Status 404 returned error can't find the container with id 8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707 Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.553274 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.564268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:37:39 crc kubenswrapper[4751]: W0130 21:37:39.658420 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90f6a78_a996_49f8_a567_d2699c737d1f.slice/crio-0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e WatchSource:0}: Error finding container 0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e: Status 404 returned error can't find the container with id 0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.658899 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.778631 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:37:39 crc kubenswrapper[4751]: I0130 21:37:39.952502 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.084517 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.176420 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.278394 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.296529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.296584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"d95ab28a617fe672cc0a279ffe8dbf4ffcdc0187184b974eff4f150e1720495f"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.306302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerStarted","Data":"3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.306357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerStarted","Data":"8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.318840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerStarted","Data":"7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.333070 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerID="45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a" exitCode=0 Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.333129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerDied","Data":"45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.334702 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerStarted","Data":"d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.339544 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerStarted","Data":"0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.399186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerStarted","Data":"7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.414378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"f50cd608a1a18449e64efb07caa3e3fd54b436a099efe0495f393f4382e9ab10"} Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.433178 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lwm4t" podStartSLOduration=3.433160725 podStartE2EDuration="3.433160725s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:40.396880407 +0000 UTC m=+1399.142703056" watchObservedRunningTime="2026-01-30 21:37:40.433160725 +0000 UTC m=+1399.178983374" Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.489588 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tpqxs" podStartSLOduration=3.48956547 podStartE2EDuration="3.48956547s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:40.475877665 +0000 UTC m=+1399.221700314" watchObservedRunningTime="2026-01-30 21:37:40.48956547 +0000 UTC m=+1399.235388119" Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.551024 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: I0130 21:37:40.748455 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:37:40 crc kubenswrapper[4751]: W0130 21:37:40.753987 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ae4a36d_9cf4_40cd_aa7a_6da368242040.slice/crio-753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97 WatchSource:0}: Error finding container 753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97: Status 404 returned error can't find the container with id 753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97 Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.339492 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438518 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.438768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") pod \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\" (UID: \"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a\") " Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.457686 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx" (OuterVolumeSpecName: "kube-api-access-rtdkx") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "kube-api-access-rtdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.481107 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.484383 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config" (OuterVolumeSpecName: "config") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.506018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.506888 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.507022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-64gs8" event={"ID":"cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a","Type":"ContainerDied","Data":"f03447e044ec076808966e68051e046395f76c21c7d74f8ecac4fe5c4986784d"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.507091 4751 scope.go:117] "RemoveContainer" containerID="45febbc6f670463da13688cdf32eacffd29dea9d71d9b8485fc96c2e17071d4a" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.508183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.510184 4751 generic.go:334] "Generic (PLEG): container finished" podID="714bda18-396a-4c61-b32c-28c97f9212c7" containerID="ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0" exitCode=0 Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.510234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.514232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" (UID: "cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.520287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"e11c96cff5a330d983c0e1dc8367150a90bc8159e4fde7a3ae81c0c2e9080bec"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.542167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97"} Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545726 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545758 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545767 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545778 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545788 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdkx\" (UniqueName: \"kubernetes.io/projected/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-kube-api-access-rtdkx\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.545796 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.940534 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:41 crc kubenswrapper[4751]: I0130 21:37:41.950007 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-64gs8"] Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.005142 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" path="/var/lib/kubelet/pods/cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a/volumes" Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.570441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerStarted","Data":"d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.570977 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.574992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.587079 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972"} Jan 30 21:37:42 crc kubenswrapper[4751]: I0130 21:37:42.601896 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podStartSLOduration=4.601881721 podStartE2EDuration="4.601881721s" podCreationTimestamp="2026-01-30 21:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:42.600862294 +0000 UTC m=+1401.346684943" watchObservedRunningTime="2026-01-30 21:37:42.601881721 +0000 UTC m=+1401.347704370" Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.620841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerStarted","Data":"f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8"} Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.620934 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log" containerID="cri-o://119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.621028 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd" containerID="cri-o://f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerStarted","Data":"4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152"} Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623191 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log" containerID="cri-o://8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.623260 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd" containerID="cri-o://4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152" gracePeriod=30 Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.646629 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.646589298 podStartE2EDuration="7.646589298s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:37:44.645063077 +0000 UTC m=+1403.390885726" watchObservedRunningTime="2026-01-30 21:37:44.646589298 +0000 UTC m=+1403.392411957" Jan 30 21:37:44 crc kubenswrapper[4751]: I0130 21:37:44.674627 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.674604616 podStartE2EDuration="7.674604616s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:37:44.672173781 +0000 UTC m=+1403.417996430" watchObservedRunningTime="2026-01-30 21:37:44.674604616 +0000 UTC m=+1403.420427265" Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.638917 4751 generic.go:334] "Generic (PLEG): container finished" podID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerID="7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.639132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerDied","Data":"7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642649 4751 generic.go:334] "Generic (PLEG): container finished" podID="216704d4-5c21-497f-95b7-1e882daec251" containerID="f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642672 4751 generic.go:334] "Generic (PLEG): container finished" podID="216704d4-5c21-497f-95b7-1e882daec251" containerID="119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" exitCode=143 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.642723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645019 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerID="4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152" exitCode=0 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645045 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerID="8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" exitCode=143 Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645065 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152"} Jan 30 21:37:45 crc kubenswrapper[4751]: I0130 21:37:45.645088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972"} Jan 30 
21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.036928 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.047848 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:47 crc kubenswrapper[4751]: I0130 21:37:47.669671 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.030490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j"
Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.114090 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"]
Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.120795 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" containerID="cri-o://5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" gracePeriod=10
Jan 30 21:37:49 crc kubenswrapper[4751]: E0130 21:37:49.370166 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ac1bbe_c547_456d_8b0a_0c29a877775c.slice/crio-5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.685984 4751 generic.go:334] "Generic (PLEG): container finished" podID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerID="5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" exitCode=0
Jan 30 21:37:49 crc kubenswrapper[4751]: I0130 21:37:49.686027 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612"}
Jan 30 21:37:52 crc kubenswrapper[4751]: I0130 21:37:52.600396 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.024793 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-586n4"]
Jan 30 21:37:56 crc kubenswrapper[4751]: E0130 21:37:56.025777 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.025792 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.026017 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9db1fc-8dff-4d85-9c5b-aee2c3757b6a" containerName="init"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.027834 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.063340 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"]
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.101865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.203844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.236666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"redhat-operators-586n4\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:56 crc kubenswrapper[4751]: I0130 21:37:56.354748 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4"
Jan 30 21:37:57 crc kubenswrapper[4751]: I0130 21:37:57.601305 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.059190 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.065091 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.079165 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.165969 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166185 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166220 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.166262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167566 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167604 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167621 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") pod \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\" (UID: \"a83a8c4a-677e-4481-b671-f5fa6edadb5f\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.167765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\" (UID: \"4ae4a36d-9cf4-40cd-aa7a-6da368242040\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.168362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs" (OuterVolumeSpecName: "logs") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.168867 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.169500 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-logs\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.169716 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ae4a36d-9cf4-40cd-aa7a-6da368242040-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts" (OuterVolumeSpecName: "scripts") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174448 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf" (OuterVolumeSpecName: "kube-api-access-sbsxf") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "kube-api-access-sbsxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.174889 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.175275 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4" (OuterVolumeSpecName: "kube-api-access-v7sv4") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "kube-api-access-v7sv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.177921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.205485 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts" (OuterVolumeSpecName: "scripts") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.231580 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.236700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data" (OuterVolumeSpecName: "config-data") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.238263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (OuterVolumeSpecName: "glance") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "pvc-2b6fe968-3470-4548-ade6-9a3644e74227". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.258772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a83a8c4a-677e-4481-b671-f5fa6edadb5f" (UID: "a83a8c4a-677e-4481-b671-f5fa6edadb5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.272465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.272549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.274750 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.274847 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.275766 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276498 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs" (OuterVolumeSpecName: "logs") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.276849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") pod \"216704d4-5c21-497f-95b7-1e882daec251\" (UID: \"216704d4-5c21-497f-95b7-1e882daec251\") "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278271 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7sv4\" (UniqueName: \"kubernetes.io/projected/4ae4a36d-9cf4-40cd-aa7a-6da368242040-kube-api-access-v7sv4\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278290 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278354 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278367 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-logs\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278378 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216704d4-5c21-497f-95b7-1e882daec251-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278429 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278440 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbsxf\" (UniqueName: \"kubernetes.io/projected/a83a8c4a-677e-4481-b671-f5fa6edadb5f-kube-api-access-sbsxf\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278449 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.278458 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280735 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280754 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.280762 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83a8c4a-677e-4481-b671-f5fa6edadb5f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.283511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts" (OuterVolumeSpecName: "scripts") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.283580 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr" (OuterVolumeSpecName: "kube-api-access-79rgr") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "kube-api-access-79rgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.304531 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data" (OuterVolumeSpecName: "config-data") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.308581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (OuterVolumeSpecName: "glance") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "pvc-03216ddc-ff0c-4c63-8e03-12380926233a". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.311344 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ae4a36d-9cf4-40cd-aa7a-6da368242040" (UID: "4ae4a36d-9cf4-40cd-aa7a-6da368242040"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.319888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.328030 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.328179 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227") on node "crc"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.338937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.346614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data" (OuterVolumeSpecName: "config-data") pod "216704d4-5c21-497f-95b7-1e882daec251" (UID: "216704d4-5c21-497f-95b7-1e882daec251"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382763 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" "
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382806 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382818 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382828 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rgr\" (UniqueName: \"kubernetes.io/projected/216704d4-5c21-497f-95b7-1e882daec251-kube-api-access-79rgr\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382847 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382858 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382867 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae4a36d-9cf4-40cd-aa7a-6da368242040-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382874 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.382883 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216704d4-5c21-497f-95b7-1e882daec251-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.406169 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.406413 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a") on node "crc"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.483835 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" DevicePath \"\""
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ae4a36d-9cf4-40cd-aa7a-6da368242040","Type":"ContainerDied","Data":"753c93823a40b1ea10f438daff89be405292baad01f508a3efec7b992c676a97"}
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803381 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.803404 4751 scope.go:117] "RemoveContainer" containerID="4c18d2acdcf0687ee372f2e7681595403ba57b693482cb625c56dc4593cc7152"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.813041 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpqxs"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.817931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpqxs" event={"ID":"a83a8c4a-677e-4481-b671-f5fa6edadb5f","Type":"ContainerDied","Data":"735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8"}
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.817989 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735b101ef3e18e05703b0e39bfc247d4eafc863fa5ee869699895513af302ba8"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.828687 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216704d4-5c21-497f-95b7-1e882daec251","Type":"ContainerDied","Data":"e11c96cff5a330d983c0e1dc8367150a90bc8159e4fde7a3ae81c0c2e9080bec"}
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.828786 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.854193 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.862709 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.894637 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895157 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895177 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895200 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895217 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap"
Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895237 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895244 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895258 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895266 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: E0130 21:37:59.895274 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895280 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895500 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" containerName="keystone-bootstrap"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895524 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="216704d4-5c21-497f-95b7-1e882daec251" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895545 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-log"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.895557 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" containerName="glance-httpd"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.896875 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.901344 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.901591 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.902132 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qdcvb"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.902244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.912172 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.923172 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.932637 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.938237 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.940425 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.945634 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.946087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 21:37:59 crc kubenswrapper[4751]: I0130 21:37:59.949818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.009457 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216704d4-5c21-497f-95b7-1e882daec251" path="/var/lib/kubelet/pods/216704d4-5c21-497f-95b7-1e882daec251/volumes"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.011156 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae4a36d-9cf4-40cd-aa7a-6da368242040" path="/var/lib/kubelet/pods/4ae4a36d-9cf4-40cd-aa7a-6da368242040/volumes"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.105623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106596 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106825 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.106997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107280 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107605 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107681 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.107891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.211850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.212877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213226 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213486 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.213682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.214362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.222276 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226054 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226137 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.226458 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.227560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.235988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.236372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.238946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.250043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.253015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.265234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.307395 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"]
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.324079 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tpqxs"]
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.349588 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.349633 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.355613 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.355660 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.412278 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-89chj"]
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.413697 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.415203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421353 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-89chj"]
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421532 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421672 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421716 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.421685 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.424536 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518591 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.518939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.519143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.519447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.587155 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.603366 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621711 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.621819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.627155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.627560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628337 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628397 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.628668 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.643885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"keystone-bootstrap-89chj\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " pod="openstack/keystone-bootstrap-89chj"
Jan 30 21:38:00 crc kubenswrapper[4751]: I0130 21:38:00.785319 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:01 crc kubenswrapper[4751]: I0130 21:38:01.991583 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83a8c4a-677e-4481-b671-f5fa6edadb5f" path="/var/lib/kubelet/pods/a83a8c4a-677e-4481-b671-f5fa6edadb5f/volumes" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.600823 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.601723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.913971 4751 generic.go:334] "Generic (PLEG): container finished" podID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerID="3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0" exitCode=0 Jan 30 21:38:07 crc kubenswrapper[4751]: I0130 21:38:07.914017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerDied","Data":"3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0"} Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.624530 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.624786 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf8hcchd4h5dh69h58ch644h55dh655h559h88hd6h5bfh688h89hf5h5f4h598h646h56bh658h544h5c4h84h55fh545h678hc7h56h657h5d7h55cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x98w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(36866d1c-b1a0-4d3e-a87f-f5901b053bb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.934600 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.934750 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-npwgd_openstack(1051dd3c-5d30-47f1-8162-3a3e9d5ee271): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Jan 30 21:38:08 crc kubenswrapper[4751]: E0130 21:38:08.935931 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-npwgd" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.099688 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180111 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180207 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180395 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.180739 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") pod \"37ac1bbe-c547-456d-8b0a-0c29a877775c\" (UID: \"37ac1bbe-c547-456d-8b0a-0c29a877775c\") " Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.200501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp" (OuterVolumeSpecName: "kube-api-access-9rqmp") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "kube-api-access-9rqmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.243567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.251876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.253637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.260904 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config" (OuterVolumeSpecName: "config") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.278176 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37ac1bbe-c547-456d-8b0a-0c29a877775c" (UID: "37ac1bbe-c547-456d-8b0a-0c29a877775c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282763 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282796 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282806 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282815 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282827 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37ac1bbe-c547-456d-8b0a-0c29a877775c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.282837 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqmp\" (UniqueName: \"kubernetes.io/projected/37ac1bbe-c547-456d-8b0a-0c29a877775c-kube-api-access-9rqmp\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.936219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" 
event={"ID":"37ac1bbe-c547-456d-8b0a-0c29a877775c","Type":"ContainerDied","Data":"6864e85d2504e6732a265d6ea2bacb5cab1c5dcba817c3a3b4ad3a6ad9332eef"} Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.936250 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" Jan 30 21:38:09 crc kubenswrapper[4751]: E0130 21:38:09.939584 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-npwgd" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" Jan 30 21:38:09 crc kubenswrapper[4751]: I0130 21:38:09.997025 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.006819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-jnhv7"] Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.338077 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.338602 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xqxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-bq6lp_openstack(564f3d8f-4b9f-4fe2-9464-baa31d6b7d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.339872 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-bq6lp" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.375946 4751 scope.go:117] "RemoveContainer" containerID="8c7c23c935d0608587f83d483f2b18799c5f2ff038edce5ec3fb7b909b434972" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.681319 4751 scope.go:117] "RemoveContainer" containerID="f55630451a66194662e90f8ccee31ad40c7dd8e161f684fe03ebc06b15f136e8" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.812800 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841362 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.841488 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") pod \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\" (UID: \"d42b4031-ca3e-4b28-b62a-eb346132dc3a\") " Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.868378 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7" (OuterVolumeSpecName: "kube-api-access-ghgm7") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "kube-api-access-ghgm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.913784 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.929994 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config" (OuterVolumeSpecName: "config") pod "d42b4031-ca3e-4b28-b62a-eb346132dc3a" (UID: "d42b4031-ca3e-4b28-b62a-eb346132dc3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.941277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943639 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943666 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgm7\" (UniqueName: \"kubernetes.io/projected/d42b4031-ca3e-4b28-b62a-eb346132dc3a-kube-api-access-ghgm7\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.943679 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42b4031-ca3e-4b28-b62a-eb346132dc3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lwm4t" event={"ID":"d42b4031-ca3e-4b28-b62a-eb346132dc3a","Type":"ContainerDied","Data":"8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707"} Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967105 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fffc2cd09b32d538d073fe7efb076cbe727bad01ceac9607f3b182ea08e3707" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.967181 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lwm4t" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.977594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerStarted","Data":"b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4"} Jan 30 21:38:10 crc kubenswrapper[4751]: E0130 21:38:10.979548 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-bq6lp" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" Jan 30 21:38:10 crc kubenswrapper[4751]: I0130 21:38:10.994357 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v9spg" podStartSLOduration=4.619046591 podStartE2EDuration="33.994339388s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.55350511 +0000 UTC m=+1398.299327759" lastFinishedPulling="2026-01-30 21:38:08.928797907 +0000 UTC m=+1427.674620556" observedRunningTime="2026-01-30 21:38:10.993849695 +0000 UTC m=+1429.739672344" watchObservedRunningTime="2026-01-30 21:38:10.994339388 +0000 UTC m=+1429.740162037" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.037816 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rt7v2" podStartSLOduration=2.379304687 podStartE2EDuration="33.037797002s" podCreationTimestamp="2026-01-30 21:37:38 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.661956664 +0000 UTC m=+1398.407779313" lastFinishedPulling="2026-01-30 21:38:10.320448949 +0000 UTC m=+1429.066271628" observedRunningTime="2026-01-30 21:38:11.026796377 +0000 UTC m=+1429.772619036" watchObservedRunningTime="2026-01-30 21:38:11.037797002 +0000 UTC m=+1429.783619651" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.103839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.180651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.242374 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4c714e3_2147_4f8a_97cd_2e62e0f3a955.slice/crio-e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b WatchSource:0}: Error finding container e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b: Status 404 returned error can't find the container with id e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.245163 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf42767ff_b1d3_49e9_8b8d_39c65ea98978.slice/crio-e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1 WatchSource:0}: Error finding container e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1: Status 404 returned error can't find the container with id e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1 Jan 30 21:38:11 crc kubenswrapper[4751]: W0130 21:38:11.246718 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e79616_9b52_47f9_a43e_01cbd487fbbd.slice/crio-c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d WatchSource:0}: Error finding container c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d: Status 404 returned error can't find the container with id c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.267108 4751 scope.go:117] "RemoveContainer" containerID="119884ecc859ddc20e43a24694f7a4c243d3d0650e6821ce6c5c66516d15e09a" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.317482 4751 scope.go:117] "RemoveContainer" containerID="5964e334ee213f037ca3d06aae948d2d9e897aa60cd6fd6594177910b8efb612" Jan 30 21:38:11 crc kubenswrapper[4751]: I0130 21:38:11.338929 4751 scope.go:117] "RemoveContainer" containerID="3fe25ad7467fd6a359800e1d2c4132e606e75ea363c0815021b0fb7427ca7b89" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.050031 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" path="/var/lib/kubelet/pods/37ac1bbe-c547-456d-8b0a-0c29a877775c/volumes" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.074907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerStarted","Data":"1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.074947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerStarted","Data":"e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093225 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c" exitCode=0 Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093338 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.093372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.120471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerStarted","Data":"2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204300 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204811 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204824 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" 
containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204836 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="init" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204842 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="init" Jan 30 21:38:12 crc kubenswrapper[4751]: E0130 21:38:12.204865 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.204873 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.205426 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.205456 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" containerName="neutron-db-sync" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.206755 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.217553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.224663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.243627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"} Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.286870 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.291631 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.293111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.293271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" 
Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.297528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.333388 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.335163 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.336938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-89chj" podStartSLOduration=12.336919686 podStartE2EDuration="12.336919686s" podCreationTimestamp="2026-01-30 21:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:12.25231626 +0000 UTC m=+1430.998138909" watchObservedRunningTime="2026-01-30 21:38:12.336919686 +0000 UTC m=+1431.082742335" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.344866 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345077 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6hfgl" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345179 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.345285 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.368892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.435970 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.450770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.450994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.451949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.452925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.453429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: 
\"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.528169 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"dnsmasq-dns-6b7b667979-trt9f\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.563406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.603680 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-jnhv7" podUID="37ac1bbe-c547-456d-8b0a-0c29a877775c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.610289 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664817 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664842 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664902 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.664931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.670345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.675398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.687517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.689939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.696920 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"neutron-566dccff6-ddvxf\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:12 crc kubenswrapper[4751]: I0130 21:38:12.778316 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.266100 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.315229 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031"} Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.319602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954"} Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.319638 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"ddb9f9108d0450b2b505f7e37bbbf5c491b44e23c17e0903abd3c8bd376265b3"} Jan 30 21:38:13 crc kubenswrapper[4751]: I0130 21:38:13.733116 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.330978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45"} Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.332679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc"} Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.332700 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"bff81fd2907d366a655d26ebdf3a255c3bffa93ae91269d7fa674f369fb98f34"} Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333897 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerID="a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678" exitCode=0 Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678"} Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.333950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerStarted","Data":"076d7d4d931c2808d030541aca7f47cf2bc19d9d8238d02afb1c64b4babe3a92"} Jan 30 21:38:14 crc 
kubenswrapper[4751]: I0130 21:38:14.336435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerStarted","Data":"38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d"} Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.432899 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.432881581 podStartE2EDuration="15.432881581s" podCreationTimestamp="2026-01-30 21:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:14.42796285 +0000 UTC m=+1433.173785499" watchObservedRunningTime="2026-01-30 21:38:14.432881581 +0000 UTC m=+1433.178704230" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.735717 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.739260 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.743675 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.744346 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.828447 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.872803 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980936 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.980959 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981113 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.981152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod 
\"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.990674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:14 crc kubenswrapper[4751]: I0130 21:38:14.993166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010317 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.010808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.011073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.014034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"neutron-5db486d6f7-9jq9s\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.068807 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:15 crc kubenswrapper[4751]: I0130 21:38:15.348952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerStarted","Data":"404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f"} Jan 30 21:38:18 crc kubenswrapper[4751]: I0130 21:38:18.403571 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45" exitCode=0 Jan 30 21:38:18 crc kubenswrapper[4751]: I0130 21:38:18.403646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.437792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerStarted","Data":"e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.438645 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.461935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerStarted","Data":"74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0"} Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.462107 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.472164 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" podStartSLOduration=8.472140959 podStartE2EDuration="8.472140959s" podCreationTimestamp="2026-01-30 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.46769866 +0000 UTC m=+1439.213521389" watchObservedRunningTime="2026-01-30 21:38:20.472140959 +0000 UTC m=+1439.217963618" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.516503 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.516478816 podStartE2EDuration="21.516478816s" podCreationTimestamp="2026-01-30 21:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.502290636 +0000 UTC m=+1439.248113285" watchObservedRunningTime="2026-01-30 21:38:20.516478816 +0000 UTC m=+1439.262301495" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.556099 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-566dccff6-ddvxf" podStartSLOduration=8.556073347 podStartE2EDuration="8.556073347s" podCreationTimestamp="2026-01-30 21:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:20.53864238 +0000 UTC m=+1439.284465029" 
watchObservedRunningTime="2026-01-30 21:38:20.556073347 +0000 UTC m=+1439.301896016" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.587594 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.587657 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.604101 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.604145 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.635980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.649143 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.649732 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:20 crc kubenswrapper[4751]: I0130 21:38:20.656536 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.473513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474219 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474244 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 21:38:21 crc kubenswrapper[4751]: I0130 21:38:21.474550 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:22 crc kubenswrapper[4751]: I0130 21:38:22.480917 4751 generic.go:334] "Generic (PLEG): container finished" podID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerID="2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094" exitCode=0 Jan 30 21:38:22 crc kubenswrapper[4751]: I0130 21:38:22.481587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerDied","Data":"2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094"} Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.494951 4751 generic.go:334] "Generic (PLEG): container finished" podID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerID="b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4" exitCode=0 Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.495259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerDied","Data":"b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4"} Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.498936 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerID="1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8" exitCode=0 Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499002 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499011 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499321 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499546 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:23 crc kubenswrapper[4751]: I0130 21:38:23.499409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerDied","Data":"1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.091472 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.126890 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.127211 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.227922 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.227976 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228078 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228150 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") pod \"8555e0d7-6d06-4edb-b463-86f7bf829949\" (UID: \"8555e0d7-6d06-4edb-b463-86f7bf829949\") " Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.228653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs" (OuterVolumeSpecName: "logs") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.229205 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8555e0d7-6d06-4edb-b463-86f7bf829949-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.233437 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn" (OuterVolumeSpecName: "kube-api-access-wb4qn") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "kube-api-access-wb4qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.247458 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts" (OuterVolumeSpecName: "scripts") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.270511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data" (OuterVolumeSpecName: "config-data") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.270525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8555e0d7-6d06-4edb-b463-86f7bf829949" (UID: "8555e0d7-6d06-4edb-b463-86f7bf829949"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337624 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb4qn\" (UniqueName: \"kubernetes.io/projected/8555e0d7-6d06-4edb-b463-86f7bf829949-kube-api-access-wb4qn\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337660 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337677 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.337686 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8555e0d7-6d06-4edb-b463-86f7bf829949-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.447572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.478900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.506987 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.510073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: W0130 21:38:24.511557 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b32add6_b9f7_4e57_9dc8_ea71dbc40276.slice/crio-830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925 WatchSource:0}: Error finding container 830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925: Status 404 returned error can't find the container with id 830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925 Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.512151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.519846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerStarted","Data":"21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524533 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v9spg" event={"ID":"8555e0d7-6d06-4edb-b463-86f7bf829949","Type":"ContainerDied","Data":"7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524551 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v9spg" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.524565 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7502198a116b3f2771f1ab3c57c8008044d28b3101423ffa202d372d5ac52b80" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.528591 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.529522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerStarted","Data":"5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97"} Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.603005 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-npwgd" podStartSLOduration=2.6926212449999998 podStartE2EDuration="47.602984015s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.156456604 +0000 UTC m=+1397.902279243" lastFinishedPulling="2026-01-30 21:38:24.066819364 +0000 UTC m=+1442.812642013" observedRunningTime="2026-01-30 21:38:24.565383807 +0000 UTC m=+1443.311206456" watchObservedRunningTime="2026-01-30 21:38:24.602984015 +0000 UTC m=+1443.348806664" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.662117 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-586n4" podStartSLOduration=17.688579625 podStartE2EDuration="29.662100178s" podCreationTimestamp="2026-01-30 21:37:55 +0000 UTC" firstStartedPulling="2026-01-30 21:38:12.095149871 +0000 UTC m=+1430.840972520" lastFinishedPulling="2026-01-30 21:38:24.068670424 +0000 UTC m=+1442.814493073" observedRunningTime="2026-01-30 21:38:24.622482127 +0000 UTC m=+1443.368304776" watchObservedRunningTime="2026-01-30 21:38:24.662100178 +0000 UTC m=+1443.407922827" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.728638 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:24 crc kubenswrapper[4751]: E0130 21:38:24.729132 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.729148 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.729372 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" containerName="placement-db-sync" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.731417 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.733180 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.734355 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8smxc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747146 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.747342 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.749879 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.853909 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957118 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957476 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957549 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.957728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.960915 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.962873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.963994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.976919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.979940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.981666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:24 crc kubenswrapper[4751]: I0130 21:38:24.981847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"placement-5486cc9958-dvfn2\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.062889 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.254964 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.272821 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371055 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371165 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371229 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371268 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371396 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") pod \"a90f6a78-a996-49f8-a567-d2699c737d1f\" (UID: \"a90f6a78-a996-49f8-a567-d2699c737d1f\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.371503 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") pod \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\" (UID: \"b4c714e3-2147-4f8a-97cd-2e62e0f3a955\") " Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.376257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" 
(UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.377232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.378476 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9" (OuterVolumeSpecName: "kube-api-access-vqlp9") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "kube-api-access-vqlp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385024 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8" (OuterVolumeSpecName: "kube-api-access-6tbq8") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "kube-api-access-6tbq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385455 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts" (OuterVolumeSpecName: "scripts") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.385481 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.411466 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a90f6a78-a996-49f8-a567-d2699c737d1f" (UID: "a90f6a78-a996-49f8-a567-d2699c737d1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.413440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data" (OuterVolumeSpecName: "config-data") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.441411 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c714e3-2147-4f8a-97cd-2e62e0f3a955" (UID: "b4c714e3-2147-4f8a-97cd-2e62e0f3a955"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474310 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474390 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474399 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474409 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474418 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqlp9\" (UniqueName: \"kubernetes.io/projected/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-kube-api-access-vqlp9\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474428 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474438 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c714e3-2147-4f8a-97cd-2e62e0f3a955-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474448 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tbq8\" (UniqueName: \"kubernetes.io/projected/a90f6a78-a996-49f8-a567-d2699c737d1f-kube-api-access-6tbq8\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.474456 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a90f6a78-a996-49f8-a567-d2699c737d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539066 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rt7v2" event={"ID":"a90f6a78-a996-49f8-a567-d2699c737d1f","Type":"ContainerDied","Data":"0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539120 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0da3c4e319786c371f66bda236269e4c334ffccdd92bab472a1f2cb2958a901e" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.539084 4751 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-db-sync-rt7v2" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541180 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerStarted","Data":"830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.541312 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542607 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-89chj" event={"ID":"b4c714e3-2147-4f8a-97cd-2e62e0f3a955","Type":"ContainerDied","Data":"e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b"} Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542621 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-89chj" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.542641 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0581ee8873f5edaa39aaa7f2d0b784b8439d87e92a780f77d43f409a915965b" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.597569 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5db486d6f7-9jq9s" podStartSLOduration=11.597549992 podStartE2EDuration="11.597549992s" podCreationTimestamp="2026-01-30 21:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:25.56088074 +0000 UTC m=+1444.306703389" watchObservedRunningTime="2026-01-30 21:38:25.597549992 +0000 UTC m=+1444.343372641" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.633471 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753414 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:25 crc kubenswrapper[4751]: E0130 21:38:25.753905 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753918 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: E0130 21:38:25.753947 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.753953 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" 
containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754145 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" containerName="keystone-bootstrap" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754167 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" containerName="barbican-db-sync" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.754890 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773025 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773409 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zjrt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.773608 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774290 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774614 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.774853 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.787154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.880243 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.887934 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.888915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" 
Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.889731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891417 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.891583 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.893211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.897065 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.899819 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.900811 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rrrpx" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.937407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994585 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994699 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994759 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994789 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:25 crc kubenswrapper[4751]: I0130 21:38:25.994943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.011100 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-credential-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.018868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-scripts\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.019785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-public-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.030919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-combined-ca-bundle\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.034070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-internal-tls-certs\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.052050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-config-data\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.059149 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aab674da-e1ff-4881-9432-fad6b85111f2-fernet-keys\") pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.078132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5wz\" (UniqueName: \"kubernetes.io/projected/aab674da-e1ff-4881-9432-fad6b85111f2-kube-api-access-qj5wz\") 
pod \"keystone-55986d9fc9-zjsx4\" (UID: \"aab674da-e1ff-4881-9432-fad6b85111f2\") " pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.096567 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.096864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097056 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.097319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.102165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.117193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.120022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.121680 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.124817 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.126488 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.130775 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.137420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"barbican-keystone-listener-599c9789d8-7n2xt\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.137726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.164367 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204187 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.204559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod 
\"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.224846 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.225128 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" containerID="cri-o://e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" gracePeriod=10 Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.228811 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.268093 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.272705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.277371 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.296187 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310268 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310440 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 
21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310550 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310651 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.310726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.311479 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.315470 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.320496 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.322074 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.335084 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.337133 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.340854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.344092 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.349562 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"barbican-worker-684d7cc675-gfk2w\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.354968 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.355007 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.372282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.374448 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.390626 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.410188 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412703 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.412939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413148 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413223 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.413738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 
21:38:26.414723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.415740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.418044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.418757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.419275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.420778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.425903 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.440975 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.443429 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"dnsmasq-dns-848cf88cfc-j4xm6\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 
21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516791 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" 
(UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.516969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.517028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.517058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.523172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76562ec1-fb40-4590-9d96-f05cafc13640-logs\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.530046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data-custom\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.530868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-combined-ca-bundle\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.532192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-config-data\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.532978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/334843b7-3c66-42fa-8880-4337946df593-logs\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.539153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data-custom\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.541974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76562ec1-fb40-4590-9d96-f05cafc13640-combined-ca-bundle\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.548723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334843b7-3c66-42fa-8880-4337946df593-config-data\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.553031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22h84\" (UniqueName: \"kubernetes.io/projected/76562ec1-fb40-4590-9d96-f05cafc13640-kube-api-access-22h84\") pod \"barbican-worker-5fd66f57b7-5jqls\" (UID: \"76562ec1-fb40-4590-9d96-f05cafc13640\") " pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.574921 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxmt\" (UniqueName: \"kubernetes.io/projected/334843b7-3c66-42fa-8880-4337946df593-kube-api-access-zzxmt\") pod \"barbican-keystone-listener-56b859c9db-tvldd\" (UID: \"334843b7-3c66-42fa-8880-4337946df593\") " pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.608264 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerID="e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" exitCode=0 Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.608418 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.610788 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.610851 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"dd3c757afd0458b9ccac3e6359d964949a2b5e06b72b283eb20687517536ba8e"} Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619145 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.619899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.625574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.626312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.632391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.649532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"barbican-api-577c4d4496-28rjx\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.699666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.733728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.742740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd66f57b7-5jqls" Jan 30 21:38:26 crc kubenswrapper[4751]: I0130 21:38:26.762918 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.082174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55986d9fc9-zjsx4"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.468914 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:27 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:27 crc kubenswrapper[4751]: > Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.575959 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.607487 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653622 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653667 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.653871 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") pod \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\" (UID: \"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de\") " Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.747816 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.749542 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q" (OuterVolumeSpecName: "kube-api-access-g798q") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "kube-api-access-g798q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.806500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.825853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847741 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847774 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g798q\" (UniqueName: \"kubernetes.io/projected/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-kube-api-access-g798q\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.847787 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.897912 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" event={"ID":"5f34e0f2-89e9-44d7-8c8e-3ee12728b7de","Type":"ContainerDied","Data":"076d7d4d931c2808d030541aca7f47cf2bc19d9d8238d02afb1c64b4babe3a92"} Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.897960 4751 scope.go:117] "RemoveContainer" containerID="e4a0adf6c315c14550be616cb96a9ec23a37d406c363546310b316d3fbfb0236" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.898093 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-trt9f" Jan 30 21:38:27 crc kubenswrapper[4751]: I0130 21:38:27.954587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55986d9fc9-zjsx4" event={"ID":"aab674da-e1ff-4881-9432-fad6b85111f2","Type":"ContainerStarted","Data":"5951c7e9f19be42ae6b12e24068ae94da55d168e37b756f99490f10534e3236f"} Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.013360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config" (OuterVolumeSpecName: "config") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.038884 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5486cc9958-dvfn2" podStartSLOduration=4.038867837 podStartE2EDuration="4.038867837s" podCreationTimestamp="2026-01-30 21:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:28.030457132 +0000 UTC m=+1446.776279781" watchObservedRunningTime="2026-01-30 21:38:28.038867837 +0000 UTC m=+1446.784690486" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.053656 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.096478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: W0130 21:38:28.121582 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8978647_a7c1_4e25_b9c9_114227c06b39.slice/crio-2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea WatchSource:0}: Error finding container 2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea: Status 404 returned error can't find the container with id 2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.123354 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" (UID: "5f34e0f2-89e9-44d7-8c8e-3ee12728b7de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:28 crc kubenswrapper[4751]: W0130 21:38:28.136624 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod334843b7_3c66_42fa_8880_4337946df593.slice/crio-06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e WatchSource:0}: Error finding container 06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e: Status 404 returned error can't find the container with id 06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.156564 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.156593 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd66f57b7-5jqls"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280597 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerStarted","Data":"744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f"} Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280623 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280634 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280643 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b859c9db-tvldd"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.280664 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.348387 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.349304 4751 scope.go:117] "RemoveContainer" containerID="a53508f839991fbdcbf4f267b810010ea1fe74ec4a4881adda9c8e4964af9678" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.403889 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-trt9f"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.438459 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:28 crc kubenswrapper[4751]: E0130 21:38:28.439071 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439091 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: 
E0130 21:38:28.439106 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="init" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="init" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.439334 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" containerName="dnsmasq-dns" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.446791 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.461512 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566267 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.566832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668525 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668577 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.668687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.671063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-logs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.678967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-combined-ca-bundle\") pod \"placement-6bcbb59b46-2xhmj\" (UID: 
\"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.683871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-internal-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.692641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-public-tls-certs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.700621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-scripts\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.718947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-config-data\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.719015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x8gs\" (UniqueName: \"kubernetes.io/projected/0cb6a4c8-d098-48b5-8ffe-ff46a64bc377-kube-api-access-2x8gs\") pod \"placement-6bcbb59b46-2xhmj\" (UID: \"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377\") " pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:28 crc kubenswrapper[4751]: I0130 21:38:28.791038 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.044795 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerStarted","Data":"bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.052257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"f69331b88c26a882279ae1095efcc5409673b2a358ee1f734b4039924d077292"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.060753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"4c453d695d623dcfb13b6ad95951d2683a6fbb29c875d8bbb6a2715ff24c2c26"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.091584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55986d9fc9-zjsx4" event={"ID":"aab674da-e1ff-4881-9432-fad6b85111f2","Type":"ContainerStarted","Data":"26cc386e95d056a2e4ef9c20a02bf5ed78ff3b76c43d45c8275281e6f8bcc1c1"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.092414 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.115651 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bq6lp" podStartSLOduration=5.06772528 podStartE2EDuration="52.115631177s" podCreationTimestamp="2026-01-30 21:37:37 +0000 UTC" firstStartedPulling="2026-01-30 21:37:39.553666235 +0000 UTC m=+1398.299488884" lastFinishedPulling="2026-01-30 21:38:26.601572132 +0000 UTC m=+1445.347394781" observedRunningTime="2026-01-30 21:38:29.069697716 +0000 UTC m=+1447.815520375" watchObservedRunningTime="2026-01-30 21:38:29.115631177 +0000 UTC m=+1447.861453826" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.116277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.116388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"90765e22c210e8d4dca2167b620180485ee5c8cdff299ab3f9aa70131e4301fe"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.120223 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55986d9fc9-zjsx4" podStartSLOduration=4.120209509 podStartE2EDuration="4.120209509s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:29.117219759 +0000 UTC m=+1447.863042408" watchObservedRunningTime="2026-01-30 21:38:29.120209509 +0000 UTC m=+1447.866032158" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.120716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" 
event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"3d06ccb2beb029cd8f729d6688245ef9156c53c373a572d41f8f38e1c44fcb1e"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.128920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"06ec7e7cf9e09728dc67ef2332efb7da9f793376db2500797f5053f3ce954b8e"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.145743 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" exitCode=0 Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.147069 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.147115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerStarted","Data":"2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea"} Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.350785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bcbb59b46-2xhmj"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.666426 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.669596 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.673052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.673113 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.697765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.740845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850495 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850741 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.850947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851085 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.851983 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2838e6-7563-4e97-893d-58d8619b780b-logs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.856265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.858104 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-config-data-custom\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.859123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-combined-ca-bundle\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.861994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-internal-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.862488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2838e6-7563-4e97-893d-58d8619b780b-public-tls-certs\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.877136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgk5\" (UniqueName: \"kubernetes.io/projected/1a2838e6-7563-4e97-893d-58d8619b780b-kube-api-access-chgk5\") pod \"barbican-api-b7f497ffb-fkntp\" (UID: \"1a2838e6-7563-4e97-893d-58d8619b780b\") " pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:29 crc kubenswrapper[4751]: I0130 21:38:29.988781 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.008669 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f34e0f2-89e9-44d7-8c8e-3ee12728b7de" path="/var/lib/kubelet/pods/5f34e0f2-89e9-44d7-8c8e-3ee12728b7de/volumes" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.170452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerStarted","Data":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.172654 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.172685 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.176394 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"bd6c8a9d0c83efa447a847c980995760c5179489f43d2cf67a1740b8cb7d57fe"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.176420 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"9274e10e8e5704f97aeac961429028f5763307728946938222ba53a21048f59c"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.188116 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-577c4d4496-28rjx" podStartSLOduration=4.18808896 podStartE2EDuration="4.18808896s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:30.185259894 +0000 UTC m=+1448.931082543" watchObservedRunningTime="2026-01-30 21:38:30.18808896 +0000 UTC m=+1448.933911609" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.189908 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerStarted","Data":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.189973 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:30 crc kubenswrapper[4751]: I0130 21:38:30.214758 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" podStartSLOduration=5.214741504 podStartE2EDuration="5.214741504s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:30.208384534 +0000 UTC m=+1448.954207183" watchObservedRunningTime="2026-01-30 21:38:30.214741504 +0000 UTC m=+1448.960564153" Jan 30 21:38:31 crc kubenswrapper[4751]: I0130 21:38:31.809697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b7f497ffb-fkntp"] Jan 30 21:38:31 crc kubenswrapper[4751]: W0130 21:38:31.814716 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2838e6_7563_4e97_893d_58d8619b780b.slice/crio-3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56 WatchSource:0}: Error finding container 3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56: Status 404 returned error can't find the container with id 3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56 Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.230783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bcbb59b46-2xhmj" event={"ID":"0cb6a4c8-d098-48b5-8ffe-ff46a64bc377","Type":"ContainerStarted","Data":"1f26f6a6e1931ca0092bc20d95f468f5ae4990cb949e330792df3d8104286def"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.231057 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.231313 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.232176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"3a95f87bf212d1d853c04491e7a74f92e47d4df580b71362f2463bb0478a9f56"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.235049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"90029ee8bdf14aa08508a916b80fe7db8b07879c4b6c630021e9d41a67d33f3d"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.243596 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"7e3e733bab2217b3c084197ff1c8d01b30aa851c87ac55908e0a6c87ddc58079"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.254850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.259858 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535"} Jan 30 21:38:32 crc kubenswrapper[4751]: I0130 21:38:32.262765 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bcbb59b46-2xhmj" podStartSLOduration=4.262746025 podStartE2EDuration="4.262746025s" podCreationTimestamp="2026-01-30 21:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:32.25319317 +0000 UTC m=+1450.999015819" watchObservedRunningTime="2026-01-30 21:38:32.262746025 +0000 UTC m=+1451.008568674" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.272356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" 
event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"19b85bac9b784cc771b4cef45d6b17580f604bf4aa31f641960ca4c684c0b355"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.274881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd66f57b7-5jqls" event={"ID":"76562ec1-fb40-4590-9d96-f05cafc13640","Type":"ContainerStarted","Data":"ba8cfedf0c994501fb1ef5f7240d8c1603e547c5b339c8de5470c13c61b42fbc"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.277318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" event={"ID":"334843b7-3c66-42fa-8880-4337946df593","Type":"ContainerStarted","Data":"0b2955b3e35fa944ef58f65304e69ed3fba383de470d48a45c715701012b5a7d"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.281216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerStarted","Data":"959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.284558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerStarted","Data":"52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f"} Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.298763 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd66f57b7-5jqls" podStartSLOduration=4.21649005 podStartE2EDuration="7.298744932s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="2026-01-30 21:38:28.123832523 +0000 UTC m=+1446.869655172" lastFinishedPulling="2026-01-30 21:38:31.206087405 +0000 UTC m=+1449.951910054" observedRunningTime="2026-01-30 21:38:33.292361371 +0000 UTC m=+1452.038184020" watchObservedRunningTime="2026-01-30 21:38:33.298744932 +0000 UTC m=+1452.044567581" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.311617 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56b859c9db-tvldd" podStartSLOduration=4.249224338 podStartE2EDuration="7.311588747s" podCreationTimestamp="2026-01-30 21:38:26 +0000 UTC" firstStartedPulling="2026-01-30 21:38:28.144704082 +0000 UTC m=+1446.890526731" lastFinishedPulling="2026-01-30 21:38:31.207068481 +0000 UTC m=+1449.952891140" observedRunningTime="2026-01-30 21:38:33.306697835 +0000 UTC m=+1452.052520484" watchObservedRunningTime="2026-01-30 21:38:33.311588747 +0000 UTC m=+1452.057411436" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.346905 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.350015 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podStartSLOduration=4.946301883 podStartE2EDuration="8.349992844s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="2026-01-30 21:38:27.786990741 +0000 UTC m=+1446.532813390" lastFinishedPulling="2026-01-30 21:38:31.190681702 +0000 UTC m=+1449.936504351" observedRunningTime="2026-01-30 21:38:33.326843415 +0000 UTC m=+1452.072666064" watchObservedRunningTime="2026-01-30 21:38:33.349992844 +0000 UTC m=+1452.095815483" Jan 30 21:38:33 crc 
kubenswrapper[4751]: I0130 21:38:33.365678 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-684d7cc675-gfk2w" podStartSLOduration=4.912632712 podStartE2EDuration="8.365661855s" podCreationTimestamp="2026-01-30 21:38:25 +0000 UTC" firstStartedPulling="2026-01-30 21:38:27.755887978 +0000 UTC m=+1446.501710627" lastFinishedPulling="2026-01-30 21:38:31.208917121 +0000 UTC m=+1449.954739770" observedRunningTime="2026-01-30 21:38:33.36063963 +0000 UTC m=+1452.106462299" watchObservedRunningTime="2026-01-30 21:38:33.365661855 +0000 UTC m=+1452.111484504" Jan 30 21:38:33 crc kubenswrapper[4751]: I0130 21:38:33.387139 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:34 crc kubenswrapper[4751]: I0130 21:38:34.300493 4751 generic.go:334] "Generic (PLEG): container finished" podID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerID="5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97" exitCode=0 Jan 30 21:38:34 crc kubenswrapper[4751]: I0130 21:38:34.300582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerDied","Data":"5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97"} Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.311828 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-684d7cc675-gfk2w" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" containerID="cri-o://52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312058 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-684d7cc675-gfk2w" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" containerID="cri-o://cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312241 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" containerID="cri-o://0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" gracePeriod=30 Jan 30 21:38:35 crc kubenswrapper[4751]: I0130 21:38:35.312288 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" containerID="cri-o://959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" gracePeriod=30 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354621 4751 generic.go:334] "Generic (PLEG): container finished" podID="7a46895c-c496-4a55-b580-37e5118d467e" containerID="959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" exitCode=0 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354920 4751 generic.go:334] "Generic (PLEG): container finished" podID="7a46895c-c496-4a55-b580-37e5118d467e" containerID="0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" exitCode=143 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.354986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.355012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.362042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-npwgd" event={"ID":"1051dd3c-5d30-47f1-8162-3a3e9d5ee271","Type":"ContainerDied","Data":"61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.362308 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ca5d5ef115983d440cf3f223c2366b80c380ae13e04f479bd30ca5a18ae1d4" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374394 4751 generic.go:334] "Generic (PLEG): container finished" podID="97e3362c-da50-4989-abe0-9dde0694c635" containerID="52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" exitCode=0 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374427 4751 generic.go:334] "Generic (PLEG): container finished" podID="97e3362c-da50-4989-abe0-9dde0694c635" containerID="cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" exitCode=143 Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.374479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535"} Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.420389 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.518841 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.518985 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.519036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") pod \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\" (UID: \"1051dd3c-5d30-47f1-8162-3a3e9d5ee271\") " Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.531591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb" (OuterVolumeSpecName: "kube-api-access-vxctb") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "kube-api-access-vxctb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.575581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.620866 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.620904 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxctb\" (UniqueName: \"kubernetes.io/projected/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-kube-api-access-vxctb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.661236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data" (OuterVolumeSpecName: "config-data") pod "1051dd3c-5d30-47f1-8162-3a3e9d5ee271" (UID: "1051dd3c-5d30-47f1-8162-3a3e9d5ee271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.702503 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.724623 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1051dd3c-5d30-47f1-8162-3a3e9d5ee271-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.767159 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:36 crc kubenswrapper[4751]: I0130 21:38:36.767413 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" containerID="cri-o://d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" gracePeriod=10 Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388236 4751 generic.go:334] "Generic (PLEG): container finished" podID="714bda18-396a-4c61-b32c-28c97f9212c7" containerID="d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" exitCode=0 Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3"} Jan 30 21:38:37 crc kubenswrapper[4751]: I0130 21:38:37.388636 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-npwgd" Jan 30 21:38:38 crc kubenswrapper[4751]: I0130 21:38:38.688988 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:38 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:38 crc kubenswrapper[4751]: > Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.029930 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.430501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerDied","Data":"bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397"} Jan 30 21:38:39 crc kubenswrapper[4751]: I0130 21:38:39.430442 4751 generic.go:334] "Generic (PLEG): container finished" podID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerID="bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397" exitCode=0 Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.174238 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.280949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs" (OuterVolumeSpecName: "logs") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281017 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281130 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") pod \"7a46895c-c496-4a55-b580-37e5118d467e\" (UID: \"7a46895c-c496-4a55-b580-37e5118d467e\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.281742 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a46895c-c496-4a55-b580-37e5118d467e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.296615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq" (OuterVolumeSpecName: "kube-api-access-sg9wq") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "kube-api-access-sg9wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.296763 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.331066 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383010 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383045 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.383054 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9wq\" (UniqueName: \"kubernetes.io/projected/7a46895c-c496-4a55-b580-37e5118d467e-kube-api-access-sg9wq\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.413013 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data" (OuterVolumeSpecName: "config-data") pod "7a46895c-c496-4a55-b580-37e5118d467e" (UID: "7a46895c-c496-4a55-b580-37e5118d467e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.467657 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.468633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-599c9789d8-7n2xt" event={"ID":"7a46895c-c496-4a55-b580-37e5118d467e","Type":"ContainerDied","Data":"f69331b88c26a882279ae1095efcc5409673b2a358ee1f734b4039924d077292"} Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.468748 4751 scope.go:117] "RemoveContainer" containerID="959811e3ff13503eafe6cff50a8c03fc84e50fb6ff23ae19bd94fccb9c4b2d25" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.485178 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a46895c-c496-4a55-b580-37e5118d467e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.564911 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.565101 4751 scope.go:117] "RemoveContainer" containerID="0bb2cc6dca6986bdb5a77526a0076115cd791dc2be166bdef0afca55706c64d0" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.596356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.607748 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-599c9789d8-7n2xt"] Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.669863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.683513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.706647 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.792809 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.792916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793180 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.793279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") pod \"714bda18-396a-4c61-b32c-28c97f9212c7\" (UID: \"714bda18-396a-4c61-b32c-28c97f9212c7\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.800357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s" (OuterVolumeSpecName: "kube-api-access-pc98s") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "kube-api-access-pc98s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: E0130 21:38:40.836883 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.876184 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.879871 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.885543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.893268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.896832 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.896960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897212 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") pod \"97e3362c-da50-4989-abe0-9dde0694c635\" (UID: \"97e3362c-da50-4989-abe0-9dde0694c635\") " Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.897499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs" (OuterVolumeSpecName: "logs") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898116 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898131 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898143 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898151 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc98s\" (UniqueName: \"kubernetes.io/projected/714bda18-396a-4c61-b32c-28c97f9212c7-kube-api-access-pc98s\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898160 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97e3362c-da50-4989-abe0-9dde0694c635-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.898168 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.909553 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.911628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5" (OuterVolumeSpecName: "kube-api-access-88fh5") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "kube-api-access-88fh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.913364 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config" (OuterVolumeSpecName: "config") pod "714bda18-396a-4c61-b32c-28c97f9212c7" (UID: "714bda18-396a-4c61-b32c-28c97f9212c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.927065 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.933344 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:38:40 crc kubenswrapper[4751]: I0130 21:38:40.997140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data" (OuterVolumeSpecName: "config-data") pod "97e3362c-da50-4989-abe0-9dde0694c635" (UID: "97e3362c-da50-4989-abe0-9dde0694c635"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002533 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002662 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002678 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714bda18-396a-4c61-b32c-28c97f9212c7-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002698 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97e3362c-da50-4989-abe0-9dde0694c635-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.002713 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fh5\" (UniqueName: \"kubernetes.io/projected/97e3362c-da50-4989-abe0-9dde0694c635-kube-api-access-88fh5\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105715 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105748 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105818 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.105981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") pod \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\" (UID: \"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24\") " Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.106719 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.113615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr" (OuterVolumeSpecName: "kube-api-access-4xqxr") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "kube-api-access-4xqxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.114595 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.117478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts" (OuterVolumeSpecName: "scripts") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.143294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.176743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data" (OuterVolumeSpecName: "config-data") pod "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" (UID: "564f3d8f-4b9f-4fe2-9464-baa31d6b7d24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210490 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210737 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xqxr\" (UniqueName: \"kubernetes.io/projected/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-kube-api-access-4xqxr\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210796 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210860 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.210917 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.485602 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bq6lp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.485586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bq6lp" event={"ID":"564f3d8f-4b9f-4fe2-9464-baa31d6b7d24","Type":"ContainerDied","Data":"d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.486293 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48985825eedc61af18140d62898c9c9236f51e33569add314f3a9440bbd00d5" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492625 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-684d7cc675-gfk2w" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492638 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-684d7cc675-gfk2w" event={"ID":"97e3362c-da50-4989-abe0-9dde0694c635","Type":"ContainerDied","Data":"4c453d695d623dcfb13b6ad95951d2683a6fbb29c875d8bbb6a2715ff24c2c26"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.492705 4751 scope.go:117] "RemoveContainer" containerID="52c38d4fff52ee815c69164766a6fb7e40fd7fe7b4e65c636741695adcfc586f" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.504778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerStarted","Data":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.504972 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" containerID="cri-o://e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505760 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" containerID="cri-o://cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.505820 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" containerID="cri-o://3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" gracePeriod=30 Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.527153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" event={"ID":"714bda18-396a-4c61-b32c-28c97f9212c7","Type":"ContainerDied","Data":"d95ab28a617fe672cc0a279ffe8dbf4ffcdc0187184b974eff4f150e1720495f"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.528171 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5x59j" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b7f497ffb-fkntp" event={"ID":"1a2838e6-7563-4e97-893d-58d8619b780b","Type":"ContainerStarted","Data":"f80d2951cd9e934acc6b9716e4aedcf612bbf8e8106ef290bbc72dadd9b128a4"} Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531304 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.531578 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.583577 4751 scope.go:117] "RemoveContainer" containerID="cdea15480a78958320e248ecfbbebe1a3f0521e65da5ed371170e13162407535" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.618289 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b7f497ffb-fkntp" podStartSLOduration=12.618262793 podStartE2EDuration="12.618262793s" podCreationTimestamp="2026-01-30 21:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:41.566104626 +0000 UTC m=+1460.311927275" watchObservedRunningTime="2026-01-30 21:38:41.618262793 +0000 UTC m=+1460.364085442" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.620155 4751 scope.go:117] "RemoveContainer" containerID="d75cd44bc174bd1c8fb960d6b48079304f29a447af1f96a3c4feb1e101ec22b3" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.653776 4751 scope.go:117] "RemoveContainer" containerID="ef63bfb279b0f69282ea04ec8633731532edcb149df478089ae1d0918490a1d0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.657742 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.669594 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-684d7cc675-gfk2w"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.690898 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.753926 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5x59j"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801556 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801569 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801583 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="init" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801589 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="init" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801599 4751 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801606 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801617 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801623 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801631 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801637 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801657 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801663 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801678 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801684 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: E0130 21:38:41.801704 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801710 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801923 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801956 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" containerName="heat-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801963 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a46895c-c496-4a55-b580-37e5118d467e" containerName="barbican-keystone-listener-log" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801975 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e3362c-da50-4989-abe0-9dde0694c635" containerName="barbican-worker" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801986 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" containerName="dnsmasq-dns" 
Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.801999 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" containerName="cinder-db-sync" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.803178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.807741 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-s756f" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.808582 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.808710 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.810818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.826277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.868575 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.870490 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.914932 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.960721 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961001 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961116 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:41 crc kubenswrapper[4751]: I0130 21:38:41.961256 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.007703 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="714bda18-396a-4c61-b32c-28c97f9212c7" path="/var/lib/kubelet/pods/714bda18-396a-4c61-b32c-28c97f9212c7/volumes" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.008530 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a46895c-c496-4a55-b580-37e5118d467e" path="/var/lib/kubelet/pods/7a46895c-c496-4a55-b580-37e5118d467e/volumes" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.009129 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e3362c-da50-4989-abe0-9dde0694c635" path="/var/lib/kubelet/pods/97e3362c-da50-4989-abe0-9dde0694c635/volumes" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063690 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063833 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063966 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: 
\"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.063991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064009 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.064995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.074756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.076102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.077123 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.086780 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.090111 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.093615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.094912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.099370 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.118810 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.140637 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167625 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167726 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.167909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.169036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.172926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.184420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.185551 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.185706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.191293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"dnsmasq-dns-6578955fd5-hb44m\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.229140 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269588 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269673 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269747 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.269809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.374153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.376853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.376980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377045 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.377374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.378512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.379371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.381829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.382188 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.388908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.389874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.400509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"cinder-api-0\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556903 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" exitCode=0 Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556932 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" exitCode=2 Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.556982 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"} Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.557008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"} Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.588757 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.777345 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.789171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:38:42 crc kubenswrapper[4751]: I0130 21:38:42.917773 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.076801 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"] Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.077772 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" containerID="cri-o://ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14" gracePeriod=30 Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.079742 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" containerID="cri-o://43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec" gracePeriod=30 Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.102651 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": EOF" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.111255 
4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6989c95c85-6thsl"] Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.114119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.139424 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6989c95c85-6thsl"] Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.155165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.205939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.309992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.310811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.325469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-httpd-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.330703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-config\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.339306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8gbm\" (UniqueName: \"kubernetes.io/projected/68910b8d-2ec3-4b7c-956c-e3d3518042cf-kube-api-access-n8gbm\") pod \"neutron-6989c95c85-6thsl\" (UID: 
\"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.343773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-internal-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.345905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-public-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.348044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-ovndb-tls-certs\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.348495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68910b8d-2ec3-4b7c-956c-e3d3518042cf-combined-ca-bundle\") pod \"neutron-6989c95c85-6thsl\" (UID: \"68910b8d-2ec3-4b7c-956c-e3d3518042cf\") " pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.420614 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.626265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"4dcb9afa9312739842512264d7e9318586580a0008020bfa37918a74b0c057c7"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.693284 4751 generic.go:334] "Generic (PLEG): container finished" podID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerID="43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec" exitCode=0 Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.693399 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.706364 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.706402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.739633 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.740662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b"} Jan 30 21:38:43 crc kubenswrapper[4751]: I0130 21:38:43.968696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.387182 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6989c95c85-6thsl"] Jan 30 21:38:44 crc kubenswrapper[4751]: W0130 21:38:44.421576 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68910b8d_2ec3_4b7c_956c_e3d3518042cf.slice/crio-51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0 WatchSource:0}: Error finding container 51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0: Status 404 returned error can't find the container with id 51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0 Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.471897 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.767785 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.770434 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.775778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"3efd88f9d9e3952d7fe52410ecacbd8f777f8f273d5d9e516b1db1ff8cf8f00a"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.775822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"51f1da2c45874461212f0a926413e7c7a50ba90a39409e74c3824dbcadb6e7a0"} Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.777577 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerID="e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c" exitCode=0 Jan 30 21:38:44 crc kubenswrapper[4751]: I0130 21:38:44.778503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.086108 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5db486d6f7-9jq9s" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.202:9696/\": dial tcp 10.217.0.202:9696: connect: connection refused" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.645505 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.805267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerStarted","Data":"d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8"}
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.806039 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-hb44m"
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.811634 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerStarted","Data":"d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb"}
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817148 4751 generic.go:334] "Generic (PLEG): container finished" podID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" exitCode=0
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817379 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"}
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817481 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36866d1c-b1a0-4d3e-a87f-f5901b053bb5","Type":"ContainerDied","Data":"f50cd608a1a18449e64efb07caa3e3fd54b436a099efe0495f393f4382e9ab10"}
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817549 4751 scope.go:117] "RemoveContainer" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.817829 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.829140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.830958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831069 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831223 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.831377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") pod \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\" (UID: \"36866d1c-b1a0-4d3e-a87f-f5901b053bb5\") "
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.833025 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.833280 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.840186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts" (OuterVolumeSpecName: "scripts") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.847637 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" containerID="cri-o://b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" gracePeriod=30 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerStarted","Data":"40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848218 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.848371 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" containerID="cri-o://40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" gracePeriod=30 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.853384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w" (OuterVolumeSpecName: "kube-api-access-6x98w") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "kube-api-access-6x98w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.855504 4751 generic.go:334] "Generic (PLEG): container finished" podID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerID="ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14" exitCode=0 Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.855554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.857665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6989c95c85-6thsl" event={"ID":"68910b8d-2ec3-4b7c-956c-e3d3518042cf","Type":"ContainerStarted","Data":"2f1eee4e6896c5696d579c8440f31cec0c40baaefac5c1c58acf0760e5a65cae"} Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.858083 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.867590 4751 scope.go:117] "RemoveContainer" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.878665 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podStartSLOduration=4.878646009 podStartE2EDuration="4.878646009s" podCreationTimestamp="2026-01-30 21:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.826528213 +0000 UTC m=+1464.572350862" watchObservedRunningTime="2026-01-30 21:38:45.878646009 +0000 UTC m=+1464.624468658" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.884653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.893683 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.869719397 podStartE2EDuration="4.893661171s" podCreationTimestamp="2026-01-30 21:38:41 +0000 UTC" firstStartedPulling="2026-01-30 21:38:42.792287277 +0000 UTC m=+1461.538109926" lastFinishedPulling="2026-01-30 21:38:43.816229051 +0000 UTC m=+1462.562051700" observedRunningTime="2026-01-30 21:38:45.849144378 +0000 UTC m=+1464.594967037" watchObservedRunningTime="2026-01-30 21:38:45.893661171 +0000 UTC m=+1464.639483820" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.919934 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.926706 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.926682845 podStartE2EDuration="3.926682845s" podCreationTimestamp="2026-01-30 21:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.867929202 +0000 UTC m=+1464.613751841" watchObservedRunningTime="2026-01-30 21:38:45.926682845 +0000 UTC m=+1464.672505484" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.939204 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942076 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942120 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942132 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942141 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.942151 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x98w\" (UniqueName: \"kubernetes.io/projected/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-kube-api-access-6x98w\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.948282 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data" (OuterVolumeSpecName: "config-data") pod "36866d1c-b1a0-4d3e-a87f-f5901b053bb5" (UID: "36866d1c-b1a0-4d3e-a87f-f5901b053bb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.969824 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6989c95c85-6thsl" podStartSLOduration=2.96980211 podStartE2EDuration="2.96980211s" podCreationTimestamp="2026-01-30 21:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:45.89027164 +0000 UTC m=+1464.636094279" watchObservedRunningTime="2026-01-30 21:38:45.96980211 +0000 UTC m=+1464.715624759" Jan 30 21:38:45 crc kubenswrapper[4751]: I0130 21:38:45.992799 4751 scope.go:117] "RemoveContainer" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.026084 4751 scope.go:117] "RemoveContainer" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.032529 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": container with ID starting with cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e not found: ID does not exist" containerID="cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.032580 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e"} err="failed to get container status \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": rpc error: code = NotFound desc = could not find container \"cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e\": container with ID starting with cc90a4e11e1d029eeee34f73917760869533380772dddf04e894fab5c930872e not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.032605 4751 scope.go:117] "RemoveContainer" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.033397 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": container with ID starting with 3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0 not found: ID does not exist" containerID="3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.033451 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0"} err="failed to get container status \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": rpc error: code = NotFound desc = could not find container \"3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0\": container with ID starting with 3927c7e8f046b132aa5530dcb2faa3362f1bfb4c014417fce75dde0984b299c0 not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.033467 4751 scope.go:117] "RemoveContainer" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.035669 4751 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": container with ID starting with e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5 not found: ID does not exist" containerID="e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.035746 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5"} err="failed to get container status \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": rpc error: code = NotFound desc = could not find container \"e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5\": container with ID starting with e8f97afcbe8ddffd29b942ae042e5fce17dc8861bb8e88d2fe1fb8cc9e3afcd5 not found: ID does not exist" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.041938 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.049432 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36866d1c-b1a0-4d3e-a87f-f5901b053bb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.151996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152139 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152309 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.152410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") pod \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\" (UID: \"5b32add6-b9f7-4e57-9dc8-ea71dbc40276\") " Jan 30 21:38:46 crc 
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.168338 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t" (OuterVolumeSpecName: "kube-api-access-cz77t") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "kube-api-access-cz77t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.184122 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.208811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.214363 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.228656 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229201 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229220 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd"
Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229249 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent"
Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229268 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229275 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api"
Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229285 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229290 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core"
Jan 30 21:38:46 crc kubenswrapper[4751]: E0130 21:38:46.229317 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229335 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd"
"Deleted CPUSet assignment" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229538 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229558 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="ceilometer-notification-agent" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229569 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="sg-core" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229576 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" containerName="proxy-httpd" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.229588 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" containerName="neutron-api" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.231709 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.234549 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.234586 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.244051 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.260833 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz77t\" (UniqueName: \"kubernetes.io/projected/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-kube-api-access-cz77t\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.261341 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.269621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config" (OuterVolumeSpecName: "config") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.296361 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.298919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.302483 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.352528 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5b32add6-b9f7-4e57-9dc8-ea71dbc40276" (UID: "5b32add6-b9f7-4e57-9dc8-ea71dbc40276"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.363713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364087 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364128 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364412 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364428 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364440 4751 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.364453 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5b32add6-b9f7-4e57-9dc8-ea71dbc40276-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466255 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466320 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0"
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.466923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.467001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.469522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.470390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.471039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.471295 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.484644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"ceilometer-0\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " pod="openstack/ceilometer-0" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.494114 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871355 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerID="40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" exitCode=0
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871603 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerID="b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" exitCode=143
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31"}
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.871656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2"}
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.873032 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874193 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db486d6f7-9jq9s" event={"ID":"5b32add6-b9f7-4e57-9dc8-ea71dbc40276","Type":"ContainerDied","Data":"830f7a58fa0eff11fb7741f803681a1e2d985e3d4ba5a29223b173fc3c4a8925"}
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874235 4751 scope.go:117] "RemoveContainer" containerID="43f0c7937886815d9f7975ac5a567ad9805f039d1f79d20965dca1643fdccbec"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.874305 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db486d6f7-9jq9s"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.909599 4751 scope.go:117] "RemoveContainer" containerID="ffe40f5beac55335ed5a0e5ca3f2b87505ff8c1d5062b9daa9f817649c1fad14"
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.958027 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"]
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.970049 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5db486d6f7-9jq9s"]
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979231 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") "
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979384 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") "
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979466 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") "
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979502 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") "
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") "
Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.979883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") pod \"2e764852-fe70-4844-a2f2-53e15c45d4c1\" (UID: \"2e764852-fe70-4844-a2f2-53e15c45d4c1\") " Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.980995 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e764852-fe70-4844-a2f2-53e15c45d4c1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.986459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs" (OuterVolumeSpecName: "logs") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.988822 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.989138 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts" (OuterVolumeSpecName: "scripts") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:46 crc kubenswrapper[4751]: I0130 21:38:46.989468 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746" (OuterVolumeSpecName: "kube-api-access-vc746") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "kube-api-access-vc746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.025800 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.055885 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data" (OuterVolumeSpecName: "config-data") pod "2e764852-fe70-4844-a2f2-53e15c45d4c1" (UID: "2e764852-fe70-4844-a2f2-53e15c45d4c1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090131 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090172 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090231 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090244 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc746\" (UniqueName: \"kubernetes.io/projected/2e764852-fe70-4844-a2f2-53e15c45d4c1-kube-api-access-vc746\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090255 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e764852-fe70-4844-a2f2-53e15c45d4c1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.090307 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e764852-fe70-4844-a2f2-53e15c45d4c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.096859 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.141826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.405865 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:47 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:47 crc kubenswrapper[4751]: > Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.906841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.907285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"566588f45edf254c38a8bd2cb4cecfcf41053da3f96016aee3abcebf59acf4a0"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e764852-fe70-4844-a2f2-53e15c45d4c1","Type":"ContainerDied","Data":"4dcb9afa9312739842512264d7e9318586580a0008020bfa37918a74b0c057c7"} Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910603 4751 scope.go:117] "RemoveContainer" containerID="40612707051f098447bcc08882c9e620dc67b4da793ad4ac978ab6016d413a31" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.910654 4751 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.954938 4751 scope.go:117] "RemoveContainer" containerID="b5e875fe8c945ee695bdcc985187f51e259136b02351ff697df26ea620a452b2" Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.955904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:47 crc kubenswrapper[4751]: I0130 21:38:47.964692 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.024068 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" path="/var/lib/kubelet/pods/2e764852-fe70-4844-a2f2-53e15c45d4c1/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.024800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36866d1c-b1a0-4d3e-a87f-f5901b053bb5" path="/var/lib/kubelet/pods/36866d1c-b1a0-4d3e-a87f-f5901b053bb5/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.025685 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b32add6-b9f7-4e57-9dc8-ea71dbc40276" path="/var/lib/kubelet/pods/5b32add6-b9f7-4e57-9dc8-ea71dbc40276/volumes" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.030994 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: E0130 21:38:48.031444 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031460 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: E0130 21:38:48.031486 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031492 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031684 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api-log" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.031715 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e764852-fe70-4844-a2f2-53e15c45d4c1" containerName="cinder-api" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.032953 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.033037 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.035810 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.036141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.036596 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.221996 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.222917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223206 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 
21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.223352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.325759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.325953 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326046 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326371 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " 
pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.326586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e741273e-caa0-4a2c-9ed0-6bae195052ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.327866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e741273e-caa0-4a2c-9ed0-6bae195052ce-logs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.345794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-scripts\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.346350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.346366 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.347941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.349196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.353960 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmbb\" (UniqueName: \"kubernetes.io/projected/e741273e-caa0-4a2c-9ed0-6bae195052ce-kube-api-access-hvmbb\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.361640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e741273e-caa0-4a2c-9ed0-6bae195052ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"e741273e-caa0-4a2c-9ed0-6bae195052ce\") " pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.669260 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 21:38:48 crc kubenswrapper[4751]: I0130 21:38:48.931527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.185906 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 21:38:49 crc kubenswrapper[4751]: W0130 21:38:49.188474 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode741273e_caa0_4a2c_9ed0_6bae195052ce.slice/crio-43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04 WatchSource:0}: Error finding container 43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04: Status 404 returned error can't find the container with id 43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04 Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.954655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.960723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"4ff08d62915449ad03ba02b3f44c29ebe1137cacf07638b3880f9e961517a3a3"} Jan 30 21:38:49 crc kubenswrapper[4751]: I0130 21:38:49.960756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"43e5e0d485fdc67c25519d34c1072eb1c974b9add5cd791930f78a2168f65c04"} Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.004093 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b7f497ffb-fkntp" Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.088346 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.093523 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" containerID="cri-o://d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" gracePeriod=30 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.093822 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" containerID="cri-o://59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" gracePeriod=30 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.976107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e741273e-caa0-4a2c-9ed0-6bae195052ce","Type":"ContainerStarted","Data":"5c9ed1c831469f09eeb8ebee04c233b0a02684becb46fc04a7a28aceade51e9f"} Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.976707 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.978561 4751 
generic.go:334] "Generic (PLEG): container finished" podID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" exitCode=143 Jan 30 21:38:50 crc kubenswrapper[4751]: I0130 21:38:50.978591 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} Jan 30 21:38:51 crc kubenswrapper[4751]: I0130 21:38:51.008169 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.008152502 podStartE2EDuration="4.008152502s" podCreationTimestamp="2026-01-30 21:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:38:50.996121709 +0000 UTC m=+1469.741944358" watchObservedRunningTime="2026-01-30 21:38:51.008152502 +0000 UTC m=+1469.753975151" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.000037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerStarted","Data":"e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f"} Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.047302 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.443408427 podStartE2EDuration="6.047278483s" podCreationTimestamp="2026-01-30 21:38:46 +0000 UTC" firstStartedPulling="2026-01-30 21:38:47.102336732 +0000 UTC m=+1465.848159391" lastFinishedPulling="2026-01-30 21:38:51.706206788 +0000 UTC m=+1470.452029447" observedRunningTime="2026-01-30 21:38:52.036664478 +0000 UTC m=+1470.782487127" watchObservedRunningTime="2026-01-30 21:38:52.047278483 +0000 UTC m=+1470.793101132" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.231633 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.323976 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.324516 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" containerID="cri-o://292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" gracePeriod=10 Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.434468 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 21:38:52 crc kubenswrapper[4751]: I0130 21:38:52.488604 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.026597 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.052642 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" exitCode=0 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.052892 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" containerID="cri-o://bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80" gracePeriod=30 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.053293 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.053960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054001 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-j4xm6" event={"ID":"e8978647-a7c1-4e25-b9c9-114227c06b39","Type":"ContainerDied","Data":"2f99bc73fbfc7d9583563a00884a12bf270505fadf2b5daa26dca51bca6913ea"} Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054021 4751 scope.go:117] "RemoveContainer" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.054469 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" containerID="cri-o://d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb" gracePeriod=30 Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.055006 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112414 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: 
\"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112479 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.112537 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") pod \"e8978647-a7c1-4e25-b9c9-114227c06b39\" (UID: \"e8978647-a7c1-4e25-b9c9-114227c06b39\") " Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.120949 4751 scope.go:117] "RemoveContainer" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.137898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn" (OuterVolumeSpecName: "kube-api-access-ddsbn") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "kube-api-access-ddsbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.217038 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddsbn\" (UniqueName: \"kubernetes.io/projected/e8978647-a7c1-4e25-b9c9-114227c06b39-kube-api-access-ddsbn\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.246591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config" (OuterVolumeSpecName: "config") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.272617 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.319987 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.320022 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.322774 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.334450 4751 scope.go:117] "RemoveContainer" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: E0130 21:38:53.336765 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": container with ID starting with 292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446 not found: ID does not exist" containerID="292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.336799 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446"} err="failed to get container status \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": rpc error: code = NotFound desc = could not find container \"292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446\": container with ID starting with 292f3855d41ee7d4a77843333bb14f65b77c2740ac544185c604a1ac171d5446 not found: ID does not exist" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.336818 4751 scope.go:117] "RemoveContainer" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: E0130 21:38:53.337590 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": container with ID starting with e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1 not found: ID does not exist" containerID="e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.337649 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1"} err="failed to get container status \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": rpc error: code = NotFound desc = could not find container \"e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1\": container with ID starting with e2f7dd40889c591749a184c2b14e879cac65ab57554ef2c3f648955df0d136e1 not found: ID does not exist" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.339736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.341843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8978647-a7c1-4e25-b9c9-114227c06b39" (UID: "e8978647-a7c1-4e25-b9c9-114227c06b39"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425014 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425068 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.425081 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8978647-a7c1-4e25-b9c9-114227c06b39-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.427879 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.432595 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:45748->10.217.0.210:9311: read: connection reset by peer" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.432607 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-577c4d4496-28rjx" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.210:9311/healthcheck\": read tcp 10.217.0.2:45758->10.217.0.210:9311: read: connection reset by peer" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.442930 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-j4xm6"] Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.874616 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:53 crc kubenswrapper[4751]: I0130 21:38:53.995800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" path="/var/lib/kubelet/pods/e8978647-a7c1-4e25-b9c9-114227c06b39/volumes" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036356 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.036658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037010 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs" (OuterVolumeSpecName: "logs") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037547 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.037751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") pod \"49d33f4c-f33a-445b-90ab-795e750ecf2a\" (UID: \"49d33f4c-f33a-445b-90ab-795e750ecf2a\") " Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.038866 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d33f4c-f33a-445b-90ab-795e750ecf2a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.041217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.042847 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b" (OuterVolumeSpecName: "kube-api-access-mtm4b") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "kube-api-access-mtm4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064581 4751 generic.go:334] "Generic (PLEG): container finished" podID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" exitCode=0 Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064633 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-577c4d4496-28rjx" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-577c4d4496-28rjx" event={"ID":"49d33f4c-f33a-445b-90ab-795e750ecf2a","Type":"ContainerDied","Data":"90765e22c210e8d4dca2167b620180485ee5c8cdff299ab3f9aa70131e4301fe"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.064726 4751 scope.go:117] "RemoveContainer" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.073400 4751 generic.go:334] "Generic (PLEG): container finished" podID="81228544-ce67-44f1-b4e0-6a218e154363" containerID="d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb" exitCode=0 Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.073801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb"} Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.078699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.106526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data" (OuterVolumeSpecName: "config-data") pod "49d33f4c-f33a-445b-90ab-795e750ecf2a" (UID: "49d33f4c-f33a-445b-90ab-795e750ecf2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.126749 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.126791 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141074 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141107 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141117 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtm4b\" (UniqueName: \"kubernetes.io/projected/49d33f4c-f33a-445b-90ab-795e750ecf2a-kube-api-access-mtm4b\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.141127 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d33f4c-f33a-445b-90ab-795e750ecf2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.212918 4751 scope.go:117] "RemoveContainer" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258043 4751 scope.go:117] "RemoveContainer" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: E0130 21:38:54.258509 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": container with ID starting with 59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718 not found: ID does not exist" containerID="59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258554 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718"} err="failed to get container status \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": rpc error: code = NotFound desc = could not find container \"59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718\": container with ID starting with 59d12543bc5db0960672f133622c6e1a94aba99de71c925682294151943b6718 not found: ID does not exist" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258579 4751 scope.go:117] "RemoveContainer" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: E0130 21:38:54.258941 4751 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": container with ID starting with d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d not found: ID does not exist" containerID="d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.258976 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d"} err="failed to get container status \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": rpc error: code = NotFound desc = could not find container \"d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d\": container with ID starting with d8ef3143c3789b4413d5bf14187839a4bdae12b67fce8d578bbd94065a621c7d not found: ID does not exist" Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.402689 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:54 crc kubenswrapper[4751]: I0130 21:38:54.410679 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-577c4d4496-28rjx"] Jan 30 21:38:55 crc kubenswrapper[4751]: I0130 21:38:55.992513 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" path="/var/lib/kubelet/pods/49d33f4c-f33a-445b-90ab-795e750ecf2a/volumes" Jan 30 21:38:56 crc kubenswrapper[4751]: I0130 21:38:56.277755 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:56 crc kubenswrapper[4751]: I0130 21:38:56.280476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:38:57 crc kubenswrapper[4751]: I0130 21:38:57.415136 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:38:57 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:38:57 crc kubenswrapper[4751]: > Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.118734 4751 generic.go:334] "Generic (PLEG): container finished" podID="81228544-ce67-44f1-b4e0-6a218e154363" containerID="bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80" exitCode=0 Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.118782 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80"} Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.119059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"81228544-ce67-44f1-b4e0-6a218e154363","Type":"ContainerDied","Data":"443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b"} Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.119073 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443cd982273ccdaa55a785fc0ffbd0bf36ddc8ddfcc7c39c30424ccabdcf775b" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.208624 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.345691 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") pod \"81228544-ce67-44f1-b4e0-6a218e154363\" (UID: \"81228544-ce67-44f1-b4e0-6a218e154363\") " Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.346067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.346715 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81228544-ce67-44f1-b4e0-6a218e154363-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.351234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.352567 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts" (OuterVolumeSpecName: "scripts") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.373718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt" (OuterVolumeSpecName: "kube-api-access-bbtqt") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "kube-api-access-bbtqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.418488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449043 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449074 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbtqt\" (UniqueName: \"kubernetes.io/projected/81228544-ce67-44f1-b4e0-6a218e154363-kube-api-access-bbtqt\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449085 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.449093 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.478586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data" (OuterVolumeSpecName: "config-data") pod "81228544-ce67-44f1-b4e0-6a218e154363" (UID: "81228544-ce67-44f1-b4e0-6a218e154363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:38:58 crc kubenswrapper[4751]: I0130 21:38:58.550793 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81228544-ce67-44f1-b4e0-6a218e154363-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.128913 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.172679 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.189176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55986d9fc9-zjsx4" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.192810 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.219449 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220014 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220054 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="init" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220060 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="init" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220069 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220075 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220094 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220099 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220121 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220127 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: E0130 21:38:59.220135 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220142 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220385 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="cinder-scheduler" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220403 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="81228544-ce67-44f1-b4e0-6a218e154363" containerName="probe" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220413 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api-log" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220422 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8978647-a7c1-4e25-b9c9-114227c06b39" containerName="dnsmasq-dns" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.220434 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d33f4c-f33a-445b-90ab-795e750ecf2a" containerName="barbican-api" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.221632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.229572 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.230681 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370787 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370892 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.370922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.371064 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.371093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473493 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473518 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.473627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.475042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.478724 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.482757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.484092 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-config-data\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.487799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-scripts\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.490897 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lvg\" (UniqueName: \"kubernetes.io/projected/927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56-kube-api-access-d5lvg\") pod \"cinder-scheduler-0\" (UID: \"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56\") " pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.558433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.710904 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.713013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.716651 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cb7bq" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.717286 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.718157 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.750070 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895680 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.895788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998817 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.998882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:38:59 crc kubenswrapper[4751]: I0130 21:38:59.999000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.000482 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.006375 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.024925 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.042623 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"openstackclient\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.059916 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81228544-ce67-44f1-b4e0-6a218e154363" path="/var/lib/kubelet/pods/81228544-ce67-44f1-b4e0-6a218e154363/volumes" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.061247 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.066755 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.069079 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.083684 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.111866 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.120563 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: W0130 21:39:00.150119 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod927e4c2b_4fb5_4ccb_adeb_8847ea0c4c56.slice/crio-d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba WatchSource:0}: Error finding container d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba: Status 404 returned error can't find the container with id d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.153163 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.198993 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205402 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205570 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.205590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.242171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bcbb59b46-2xhmj" Jan 30 21:39:00 crc kubenswrapper[4751]: E0130 21:39:00.317936 4751 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 21:39:00 crc kubenswrapper[4751]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4d8da9bd-aba2-45b4-acc9-7fb085937e02_0(b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca): error adding pod openstack_openstackclient to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca" Netns:"/var/run/netns/7d09ce61-bb6f-4487-bb52-c83ca4231ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca;K8S_POD_UID=4d8da9bd-aba2-45b4-acc9-7fb085937e02" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4d8da9bd-aba2-45b4-acc9-7fb085937e02]: expected pod UID "4d8da9bd-aba2-45b4-acc9-7fb085937e02" but got "af93872a-62a1-407c-9932-2afb4313f457" from Kube API Jan 30 21:39:00 crc kubenswrapper[4751]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 21:39:00 crc kubenswrapper[4751]: > Jan 30 21:39:00 crc kubenswrapper[4751]: E0130 21:39:00.317999 4751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 21:39:00 crc kubenswrapper[4751]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4d8da9bd-aba2-45b4-acc9-7fb085937e02_0(b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca" Netns:"/var/run/netns/7d09ce61-bb6f-4487-bb52-c83ca4231ff9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=b887095ea0c872474f9d78a2358219c9876e6dd480dcbfe55a57315616eb56ca;K8S_POD_UID=4d8da9bd-aba2-45b4-acc9-7fb085937e02" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4d8da9bd-aba2-45b4-acc9-7fb085937e02]: expected pod UID "4d8da9bd-aba2-45b4-acc9-7fb085937e02" but got "af93872a-62a1-407c-9932-2afb4313f457" from Kube API Jan 30 21:39:00 crc kubenswrapper[4751]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 21:39:00 crc kubenswrapper[4751]: > pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.318660 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.319294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af93872a-62a1-407c-9932-2afb4313f457-openstack-config\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330744 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-openstack-config-secret\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.330896 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5486cc9958-dvfn2" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" containerID="cri-o://5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" gracePeriod=30 Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.331098 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5486cc9958-dvfn2" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" containerID="cri-o://744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" gracePeriod=30 Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.339124 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af93872a-62a1-407c-9932-2afb4313f457-combined-ca-bundle\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.378672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mvp\" (UniqueName: \"kubernetes.io/projected/af93872a-62a1-407c-9932-2afb4313f457-kube-api-access-46mvp\") pod \"openstackclient\" (UID: \"af93872a-62a1-407c-9932-2afb4313f457\") " pod="openstack/openstackclient" Jan 30 21:39:00 crc kubenswrapper[4751]: I0130 21:39:00.467251 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.162905 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.187590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"d99060427762460061aade093f19a0e2bb89cdcb1a82789a27c6ea17076cecba"} Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.191293 4751 generic.go:334] "Generic (PLEG): container finished" podID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerID="5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" exitCode=143 Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.191393 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.192112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23"} Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.197739 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.201991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:01 crc kubenswrapper[4751]: W0130 21:39:01.249468 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf93872a_62a1_407c_9932_2afb4313f457.slice/crio-739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74 WatchSource:0}: Error finding container 739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74: Status 404 returned error can't find the container with id 739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74 Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.372512 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.372958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.373014 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.373183 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") pod \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\" (UID: \"4d8da9bd-aba2-45b4-acc9-7fb085937e02\") " Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381523 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381578 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.381673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2" (OuterVolumeSpecName: "kube-api-access-k8zl2") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "kube-api-access-k8zl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.422792 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4d8da9bd-aba2-45b4-acc9-7fb085937e02" (UID: "4d8da9bd-aba2-45b4-acc9-7fb085937e02"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475255 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475288 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zl2\" (UniqueName: \"kubernetes.io/projected/4d8da9bd-aba2-45b4-acc9-7fb085937e02-kube-api-access-k8zl2\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475299 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.475306 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4d8da9bd-aba2-45b4-acc9-7fb085937e02-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:01 crc kubenswrapper[4751]: I0130 21:39:01.986981 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" path="/var/lib/kubelet/pods/4d8da9bd-aba2-45b4-acc9-7fb085937e02/volumes" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.203395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"af93872a-62a1-407c-9932-2afb4313f457","Type":"ContainerStarted","Data":"739a2a72ce761a3f373f6256f8497584526f26f3772ad8fb56df46573f893e74"} Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.204483 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.204564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"be4296f6b217dd2b689af14c4d2482756a70e0c8c65c70d369dec116324e23e7"} Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.264516 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4d8da9bd-aba2-45b4-acc9-7fb085937e02" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:02 crc kubenswrapper[4751]: I0130 21:39:02.678528 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="e741273e-caa0-4a2c-9ed0-6bae195052ce" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.218:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.216084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56","Type":"ContainerStarted","Data":"f88b06701c46651952fd41f6764053d26180a5f35e312a74895bb2817e3380bc"} Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.329548 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 21:39:03 crc kubenswrapper[4751]: I0130 21:39:03.361386 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.361362185 podStartE2EDuration="4.361362185s" podCreationTimestamp="2026-01-30 21:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:03.247251799 +0000 UTC m=+1481.993074448" watchObservedRunningTime="2026-01-30 21:39:03.361362185 +0000 UTC m=+1482.107184834" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.241182 4751 generic.go:334] "Generic (PLEG): container finished" podID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerID="744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" exitCode=0 Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.242755 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f"} Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.401443 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.558846 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567247 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.567830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") pod \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\" (UID: \"5089359d-290c-4b07-80e4-0c4c73ffa8cd\") " Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.572024 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs" (OuterVolumeSpecName: "logs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.594507 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts" (OuterVolumeSpecName: "scripts") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.594644 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw" (OuterVolumeSpecName: "kube-api-access-hhxpw") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "kube-api-access-hhxpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.668410 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data" (OuterVolumeSpecName: "config-data") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671378 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671401 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671412 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhxpw\" (UniqueName: \"kubernetes.io/projected/5089359d-290c-4b07-80e4-0c4c73ffa8cd-kube-api-access-hhxpw\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.671421 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5089359d-290c-4b07-80e4-0c4c73ffa8cd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.708070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.758581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.773428 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.773459 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:04 crc kubenswrapper[4751]: I0130 21:39:04.875602 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5089359d-290c-4b07-80e4-0c4c73ffa8cd" (UID: "5089359d-290c-4b07-80e4-0c4c73ffa8cd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:04.985181 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5089359d-290c-4b07-80e4-0c4c73ffa8cd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261046 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5486cc9958-dvfn2" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5486cc9958-dvfn2" event={"ID":"5089359d-290c-4b07-80e4-0c4c73ffa8cd","Type":"ContainerDied","Data":"dd3c757afd0458b9ccac3e6359d964949a2b5e06b72b283eb20687517536ba8e"} Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.261124 4751 scope.go:117] "RemoveContainer" containerID="744360fb184e7a689ab217ce4f6709b0ff7ab37b1bf6dc2a42f1d4e37e6d2d8f" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.312121 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.313220 4751 scope.go:117] "RemoveContainer" containerID="5f448114a9e068f8f50004034c9e2ada11f4f45525f6ffe5d1ccd7a3167a8e23" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.325932 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5486cc9958-dvfn2"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:05.990062 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" path="/var/lib/kubelet/pods/5089359d-290c-4b07-80e4-0c4c73ffa8cd/volumes" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.358878 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:06 crc kubenswrapper[4751]: E0130 21:39:06.361396 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361422 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: E0130 21:39:06.361448 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.361454 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361656 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-log" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.361670 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5089359d-290c-4b07-80e4-0c4c73ffa8cd" containerName="placement-api" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.362398 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.365895 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q572p" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.365939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.366059 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.387805 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.519968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.520093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.570178 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.571894 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.577593 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.601254 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.603444 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621757 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.621873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.630728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.649307 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.649848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.669939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx9g\" (UniqueName: 
\"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"heat-engine-6c448464db-8pmrl\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.685900 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.686896 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.687391 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.689650 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.720780 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723306 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723367 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.723567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723587 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.723685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.737604 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.752663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825732 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 
21:39:06.825836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825877 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825923 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.825998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826087 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.826112 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.830799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.831547 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.831835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.835914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.836160 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.839252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.840990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.847549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.853252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9njj\" (UniqueName: 
\"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"heat-cfnapi-6b5fd5d955-5ksqz\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.853802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"dnsmasq-dns-688b9f5b49-g645r\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.903078 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.932821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934448 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.934483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.938270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.943460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.944768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"heat-api-cf77776d-s5nbq\" (UID: 
\"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:06 crc kubenswrapper[4751]: I0130 21:39:06.959168 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"heat-api-cf77776d-s5nbq\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.230234 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.392921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.448835 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:39:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:39:07 crc kubenswrapper[4751]: > Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.693765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:39:07 crc kubenswrapper[4751]: W0130 21:39:07.722281 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c918a5e_396e_4f0a_a68e_babcb03f2f4f.slice/crio-170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0 WatchSource:0}: Error finding container 170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0: Status 404 returned error can't find the container with id 170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0 Jan 30 21:39:07 crc kubenswrapper[4751]: W0130 21:39:07.807841 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8bf4d1e_d4c4_419c_b85b_5553a4996b75.slice/crio-bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a WatchSource:0}: Error finding container bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a: Status 404 returned error can't find the container with id bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.817548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:39:07 crc kubenswrapper[4751]: I0130 21:39:07.954877 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.267282 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.268964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.284369 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.341438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerStarted","Data":"4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.341478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerStarted","Data":"c52a6ec70b4ea98c373beb2d768ebee9efb71cdd7d6badafe947f067a081150e"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.342690 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.367307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerStarted","Data":"0cdf7ad51a70ca5bfb45057c9c67833605be258614b207239214e50e206ee1c5"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.372267 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6c448464db-8pmrl" podStartSLOduration=2.372235491 podStartE2EDuration="2.372235491s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:08.367612456 +0000 UTC m=+1487.113435105" watchObservedRunningTime="2026-01-30 21:39:08.372235491 +0000 UTC m=+1487.118058140" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.387909 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.387939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerStarted","Data":"bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.388061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398295 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerID="19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e" exitCode=0 Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" 
event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.398390 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerStarted","Data":"170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0"} Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.399453 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.400930 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.404373 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.413312 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.489899 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.490240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.491739 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.500485 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.502109 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.521948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"nova-api-db-create-hx7xn\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.528284 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.588992 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592380 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592599 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.592780 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.594036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.609727 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.611395 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.614079 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"nova-api-2281-account-create-update-5l5m8\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.614230 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.642153 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.643299 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699519 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.699813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc 
kubenswrapper[4751]: I0130 21:39:08.706439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.712218 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.718511 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"nova-cell0-db-create-6d52w\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.726985 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.802404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.802681 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.803668 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.809647 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.809900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.811118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " 
pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.816800 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.821858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"nova-cell0-cdda-account-create-update-xfmk4\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.823082 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.825175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.841601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.842759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.846287 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"nova-cell1-db-create-kj2ld\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.920046 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.920808 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.973187 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:08 crc kubenswrapper[4751]: I0130 21:39:08.976975 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.025813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.029870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.033570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.071062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"nova-cell1-6e74-account-create-update-gdfb4\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.312994 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.320725 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.414015 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.418240 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424028 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424214 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.424283 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.452162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerStarted","Data":"1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8"} Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.455923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.458780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerStarted","Data":"eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8"} Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.458818 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.552861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553076 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" 
Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553129 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.553218 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.654895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655061 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 
21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.655350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.665215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-public-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.665523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-log-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.666689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-combined-ca-bundle\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.667654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9f02a32-18ed-4030-94d6-16f4d0feff52-run-httpd\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.679841 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-etc-swift\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.680622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-config-data\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.687788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9f02a32-18ed-4030-94d6-16f4d0feff52-internal-tls-certs\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.694515 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vnwdb\" (UniqueName: \"kubernetes.io/projected/b9f02a32-18ed-4030-94d6-16f4d0feff52-kube-api-access-vnwdb\") pod \"swift-proxy-58dc6df599-nmmxw\" (UID: \"b9f02a32-18ed-4030-94d6-16f4d0feff52\") " pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.712281 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" podStartSLOduration=3.71225873 podStartE2EDuration="3.71225873s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:09.505571995 +0000 UTC m=+1488.251394654" watchObservedRunningTime="2026-01-30 21:39:09.71225873 +0000 UTC m=+1488.458081389" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.742915 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.797346 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:09 crc kubenswrapper[4751]: I0130 21:39:09.964740 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.064814 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.122291 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.235056 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.315058 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.498100 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerStarted","Data":"ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.501784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerStarted","Data":"95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.501837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerStarted","Data":"19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.507742 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerStarted","Data":"fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.514097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" 
event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerStarted","Data":"2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.515707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerStarted","Data":"ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.520185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerStarted","Data":"1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1"} Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.549466 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-hx7xn" podStartSLOduration=2.549443602 podStartE2EDuration="2.549443602s" podCreationTimestamp="2026-01-30 21:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:10.536079494 +0000 UTC m=+1489.281902143" watchObservedRunningTime="2026-01-30 21:39:10.549443602 +0000 UTC m=+1489.295266241" Jan 30 21:39:10 crc kubenswrapper[4751]: I0130 21:39:10.670636 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58dc6df599-nmmxw"] Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.531466 4751 generic.go:334] "Generic (PLEG): container finished" podID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerID="801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.531648 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerDied","Data":"801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.534167 4751 generic.go:334] "Generic (PLEG): container finished" podID="f7625d34-2ace-4774-89e4-72729d19ce99" containerID="819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.534222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerDied","Data":"819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.536058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"7cb7f976b28f6f8b7ee1f2e7f06487f11fa8eada6b3bdc7962a7361b72b14548"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.536088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"ea7a89d696450ca8fb338e8cd72613862a8a03b6f7bb703ee3898333c4a58ed1"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.537939 4751 generic.go:334] "Generic (PLEG): container finished" podID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" 
containerID="2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.537985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerDied","Data":"2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.540934 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerID="653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.540990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerDied","Data":"653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.545486 4751 generic.go:334] "Generic (PLEG): container finished" podID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerID="95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa" exitCode=0 Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.545584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerDied","Data":"95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.551163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerDied","Data":"1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf"} Jan 30 21:39:11 crc kubenswrapper[4751]: I0130 21:39:11.552918 4751 generic.go:334] "Generic (PLEG): container finished" podID="444e34d6-7904-405b-956e-d23aed56537e" containerID="1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf" exitCode=0 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.564141 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.564822 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" containerID="cri-o://57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565553 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" containerID="cri-o://645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565543 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" containerID="cri-o://e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.565584 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" 
containerName="sg-core" containerID="cri-o://b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" gracePeriod=30 Jan 30 21:39:12 crc kubenswrapper[4751]: I0130 21:39:12.582805 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.424888 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.510957 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6989c95c85-6thsl" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.541237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") pod \"f7625d34-2ace-4774-89e4-72729d19ce99\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.541411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") pod \"f7625d34-2ace-4774-89e4-72729d19ce99\" (UID: \"f7625d34-2ace-4774-89e4-72729d19ce99\") " Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.542529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7625d34-2ace-4774-89e4-72729d19ce99" (UID: "f7625d34-2ace-4774-89e4-72729d19ce99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.554589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm" (OuterVolumeSpecName: "kube-api-access-szjrm") pod "f7625d34-2ace-4774-89e4-72729d19ce99" (UID: "f7625d34-2ace-4774-89e4-72729d19ce99"). InnerVolumeSpecName "kube-api-access-szjrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.611692 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.611921 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-566dccff6-ddvxf" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" containerID="cri-o://2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" gracePeriod=30 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.612047 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-566dccff6-ddvxf" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" containerID="cri-o://74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" gracePeriod=30 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" event={"ID":"f7625d34-2ace-4774-89e4-72729d19ce99","Type":"ContainerDied","Data":"ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633674 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea783e09d225076eb9b5cb1322e598d579f1e2d1bcde0d5c5d107891850e5bd8" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.633743 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cdda-account-create-update-xfmk4" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.644576 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7625d34-2ace-4774-89e4-72729d19ce99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.644605 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szjrm\" (UniqueName: \"kubernetes.io/projected/f7625d34-2ace-4774-89e4-72729d19ce99-kube-api-access-szjrm\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.694190 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" exitCode=0 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696555 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" exitCode=2 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696564 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" exitCode=0 Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696585 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696619 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b"} Jan 30 21:39:13 crc kubenswrapper[4751]: I0130 21:39:13.696629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec"} Jan 30 21:39:14 crc kubenswrapper[4751]: I0130 21:39:14.714574 4751 generic.go:334] "Generic (PLEG): container finished" podID="11052d78-74b6-472a-aaba-513368f51ce3" containerID="74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" exitCode=0 Jan 30 21:39:14 crc kubenswrapper[4751]: I0130 21:39:14.714664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0"} Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.155762 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:15 crc kubenswrapper[4751]: E0130 21:39:15.156219 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.156235 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.156499 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" containerName="mariadb-account-create-update" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.157193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.199598 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.201088 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.228936 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.250641 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.252179 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284893 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.284938 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.292501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"]
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.303412 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"]
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386860 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.386991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.387008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.387026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.413290 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.413656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.416615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.418511 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"heat-engine-d9fcd4c7f-gcp2z\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.489898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.489988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490134 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490235 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.490262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.498977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.503362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.506858 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.507101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.508549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.514764 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.526504 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.526756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"heat-api-84f9b8dd8f-qtmlz\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.530127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"heat-cfnapi-6ffb596769-rgv47\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.534116 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47"
Jan 30 21:39:15 crc kubenswrapper[4751]: I0130 21:39:15.578521 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz"
Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.495087 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.217:3000/\": dial tcp 10.217.0.217:3000: connect: connection refused"
Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.759418 4751 generic.go:334] "Generic (PLEG): container finished" podID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerID="645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" exitCode=0
Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.759452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302"}
Jan 30 21:39:16 crc kubenswrapper[4751]: I0130 21:39:16.935534 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-g645r"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.028761 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.028975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" containerID="cri-o://d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8" gracePeriod=10
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.231668 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: connect: connection refused"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.422084 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=<
Jan 30 21:39:17 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 21:39:17 crc kubenswrapper[4751]: >
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.749754 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.760533 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.779417 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.781038 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.783962 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.784114 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.788931 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerID="d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8" exitCode=0
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.788976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8"}
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.803733 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.814750 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.819531 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.822194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.822696 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.843030 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"]
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.844903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.844977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845094 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.845248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv"
Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947810 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.947921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.948461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951399 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951449 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.951495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.953745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.953795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.954235 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.970503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.971091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:17 crc kubenswrapper[4751]: I0130 21:39:17.972760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"heat-cfnapi-d6c877d68-9ktwv\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065288 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.065622 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069761 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.069883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.072542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.076033 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.084902 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"heat-api-6f4bd4b69-ntk8n\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.114079 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:18 crc kubenswrapper[4751]: I0130 21:39:18.138016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:19 crc kubenswrapper[4751]: I0130 21:39:19.813179 4751 generic.go:334] "Generic (PLEG): container finished" podID="11052d78-74b6-472a-aaba-513368f51ce3" containerID="2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" exitCode=0 Jan 30 21:39:19 crc kubenswrapper[4751]: I0130 21:39:19.813244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc"} Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.766714 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.773437 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.773895 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h699h687h695h689h5f7hfch66chcfh88h5dbh697h54bh545h5c7h697h587hc5h549hb7h88h55bh648h56fh67chb8hb7h54dh64ch667hdh9bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46mvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(af93872a-62a1-407c-9932-2afb4313f457): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.775013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.851783 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6d52w" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.852341 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6d52w" event={"ID":"6f139e0b-3ae5-4d5c-aa87-f15d00373f98","Type":"ContainerDied","Data":"1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1"} Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.852412 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1412d3e968a704f0b25e82ec780f504270d2155c0b3632b09d545841f20c56f1" Jan 30 21:39:21 crc kubenswrapper[4751]: E0130 21:39:21.859002 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="af93872a-62a1-407c-9932-2afb4313f457" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") pod \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") pod \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\" (UID: \"6f139e0b-3ae5-4d5c-aa87-f15d00373f98\") " Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f139e0b-3ae5-4d5c-aa87-f15d00373f98" (UID: "6f139e0b-3ae5-4d5c-aa87-f15d00373f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.876933 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.886461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq" (OuterVolumeSpecName: "kube-api-access-slrpq") pod "6f139e0b-3ae5-4d5c-aa87-f15d00373f98" (UID: "6f139e0b-3ae5-4d5c-aa87-f15d00373f98"). InnerVolumeSpecName "kube-api-access-slrpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:21 crc kubenswrapper[4751]: I0130 21:39:21.984724 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrpq\" (UniqueName: \"kubernetes.io/projected/6f139e0b-3ae5-4d5c-aa87-f15d00373f98-kube-api-access-slrpq\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.085061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.163759 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.189951 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.190585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") pod \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.190679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") pod \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\" (UID: \"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.191841 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" (UID: "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.192862 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.209482 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6" (OuterVolumeSpecName: "kube-api-access-wn5t6") pod "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" (UID: "bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5"). InnerVolumeSpecName "kube-api-access-wn5t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.233373 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: connect: connection refused" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.233957 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.294434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") pod \"444e34d6-7904-405b-956e-d23aed56537e\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.294972 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") pod \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") pod \"a8312bae-69c5-4c31-844e-42a90c18bfd3\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") pod \"a8312bae-69c5-4c31-844e-42a90c18bfd3\" (UID: \"a8312bae-69c5-4c31-844e-42a90c18bfd3\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295076 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") pod \"444e34d6-7904-405b-956e-d23aed56537e\" (UID: \"444e34d6-7904-405b-956e-d23aed56537e\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") pod \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\" (UID: \"a169fb7b-bcf8-44d8-8942-a42a4de6001d\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295645 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5t6\" (UniqueName: \"kubernetes.io/projected/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5-kube-api-access-wn5t6\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.295678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8312bae-69c5-4c31-844e-42a90c18bfd3" (UID: "a8312bae-69c5-4c31-844e-42a90c18bfd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.296017 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "444e34d6-7904-405b-956e-d23aed56537e" (UID: "444e34d6-7904-405b-956e-d23aed56537e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.296086 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a169fb7b-bcf8-44d8-8942-a42a4de6001d" (UID: "a169fb7b-bcf8-44d8-8942-a42a4de6001d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.308469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr" (OuterVolumeSpecName: "kube-api-access-7srvr") pod "a169fb7b-bcf8-44d8-8942-a42a4de6001d" (UID: "a169fb7b-bcf8-44d8-8942-a42a4de6001d"). InnerVolumeSpecName "kube-api-access-7srvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.311130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8" (OuterVolumeSpecName: "kube-api-access-cdqx8") pod "444e34d6-7904-405b-956e-d23aed56537e" (UID: "444e34d6-7904-405b-956e-d23aed56537e"). InnerVolumeSpecName "kube-api-access-cdqx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.329677 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j" (OuterVolumeSpecName: "kube-api-access-rwn4j") pod "a8312bae-69c5-4c31-844e-42a90c18bfd3" (UID: "a8312bae-69c5-4c31-844e-42a90c18bfd3"). InnerVolumeSpecName "kube-api-access-rwn4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397316 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/444e34d6-7904-405b-956e-d23aed56537e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397371 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a169fb7b-bcf8-44d8-8942-a42a4de6001d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397382 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8312bae-69c5-4c31-844e-42a90c18bfd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397392 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwn4j\" (UniqueName: \"kubernetes.io/projected/a8312bae-69c5-4c31-844e-42a90c18bfd3-kube-api-access-rwn4j\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397404 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqx8\" (UniqueName: \"kubernetes.io/projected/444e34d6-7904-405b-956e-d23aed56537e-kube-api-access-cdqx8\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.397412 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7srvr\" (UniqueName: \"kubernetes.io/projected/a169fb7b-bcf8-44d8-8942-a42a4de6001d-kube-api-access-7srvr\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.531182 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605925 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.605984 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606162 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.606201 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") pod \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\" (UID: \"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca\") " Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.613044 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.613341 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.616689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6" (OuterVolumeSpecName: "kube-api-access-xtcg6") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "kube-api-access-xtcg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.625542 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts" (OuterVolumeSpecName: "scripts") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.647937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716270 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716302 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716310 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtcg6\" (UniqueName: \"kubernetes.io/projected/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-kube-api-access-xtcg6\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716337 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.716344 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.979868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22749c7d-bedc-4a77-a9d7-81bf0f9c70ca","Type":"ContainerDied","Data":"566588f45edf254c38a8bd2cb4cecfcf41053da3f96016aee3abcebf59acf4a0"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.980177 4751 scope.go:117] "RemoveContainer" containerID="e9063569ccc33e77085e2b00951a57c6def385f3008e41cdbaf9636a0f5b353f" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.979948 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerStarted","Data":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986780 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" containerID="cri-o://2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" gracePeriod=60 Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.986771 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.990680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" event={"ID":"5a85ff98-c5c9-4735-ad9d-3c987976bd2f","Type":"ContainerDied","Data":"d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.990718 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ad9ad89c73105ea7a484e1db33eb7c6d8564b633625c6640e82ad596737a10" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992156 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2281-account-create-update-5l5m8" Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2281-account-create-update-5l5m8" event={"ID":"a8312bae-69c5-4c31-844e-42a90c18bfd3","Type":"ContainerDied","Data":"19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba"} Jan 30 21:39:22 crc kubenswrapper[4751]: I0130 21:39:22.992492 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c59b8a8fccb214b3cbd6a763ad4108fc2b451ba8149317366be3741657e0ba" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.011960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-kj2ld" event={"ID":"444e34d6-7904-405b-956e-d23aed56537e","Type":"ContainerDied","Data":"fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.011997 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa79502d2e5e1b12d71a3aa518d74951310c1854db953285f0dd0bec57e202e2" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.012064 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-kj2ld" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.022463 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" podStartSLOduration=2.966384493 podStartE2EDuration="17.022444856s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="2026-01-30 21:39:07.821164371 +0000 UTC m=+1486.566987020" lastFinishedPulling="2026-01-30 21:39:21.877224734 +0000 UTC m=+1500.623047383" observedRunningTime="2026-01-30 21:39:23.005647606 +0000 UTC m=+1501.751470255" watchObservedRunningTime="2026-01-30 21:39:23.022444856 +0000 UTC m=+1501.768267505" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hx7xn" event={"ID":"a169fb7b-bcf8-44d8-8942-a42a4de6001d","Type":"ContainerDied","Data":"1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027499 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad9b6e61a78b2578133fbe99f7b252248de03c94cde5f1d03ed88c81b727ad8" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.027551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hx7xn" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.041406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58dc6df599-nmmxw" event={"ID":"b9f02a32-18ed-4030-94d6-16f4d0feff52","Type":"ContainerStarted","Data":"51aee9e12458cb0b79d279b34f670c75a19df0eec0106e44ca9f07968777899b"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.042130 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.042215 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.050986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" event={"ID":"bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5","Type":"ContainerDied","Data":"ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f"} Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.051038 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9d3fa4ee3aa958761c47e4d6945036a02c56588cf4b2622a33096fc40d3c2f" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.051185 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6e74-account-create-update-gdfb4" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.058313 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.069977 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58dc6df599-nmmxw" podStartSLOduration=14.069961399 podStartE2EDuration="14.069961399s" podCreationTimestamp="2026-01-30 21:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:23.068591812 +0000 UTC m=+1501.814414531" watchObservedRunningTime="2026-01-30 21:39:23.069961399 +0000 UTC m=+1501.815784038" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.138543 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data" (OuterVolumeSpecName: "config-data") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.143473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" (UID: "22749c7d-bedc-4a77-a9d7-81bf0f9c70ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.149925 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.149955 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.263985 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.291029 4751 scope.go:117] "RemoveContainer" containerID="b5aac7d6f497e2328bb417b25d63cf92f6851dadf3db5e57dc476e250917965b" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.307858 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.341015 4751 scope.go:117] "RemoveContainer" containerID="645f73160e3f56e7ed531a836f7c9a1561da7bc5259b4db0442d52545c4d2302" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354575 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354636 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.354989 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") pod \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\" (UID: \"5a85ff98-c5c9-4735-ad9d-3c987976bd2f\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.365437 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.378020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f" (OuterVolumeSpecName: "kube-api-access-z4z8f") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "kube-api-access-z4z8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.448506 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.453664 4751 scope.go:117] "RemoveContainer" containerID="57c96884c7e80a6d477536792bb89b73f1542edc832a38be9d3a266693f347ec" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460501 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460779 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460842 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.460883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") pod \"11052d78-74b6-472a-aaba-513368f51ce3\" (UID: \"11052d78-74b6-472a-aaba-513368f51ce3\") " Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.461487 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4z8f\" (UniqueName: \"kubernetes.io/projected/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-kube-api-access-z4z8f\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.476628 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477065 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477094 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="init" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477100 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="init" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477114 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11052d78-74b6-472a-aaba-513368f51ce3" 
containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477120 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477239 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477246 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477257 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477263 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477279 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477284 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477293 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477299 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477318 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477352 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477358 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477375 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477381 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477391 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477396 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477410 
4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477417 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: E0130 21:39:23.477430 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477435 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477616 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477632 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="proxy-httpd" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477641 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="11052d78-74b6-472a-aaba-513368f51ce3" containerName="neutron-api" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477653 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477661 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477675 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="sg-core" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477684 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="444e34d6-7904-405b-956e-d23aed56537e" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477690 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-notification-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477699 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" containerName="ceilometer-central-agent" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477711 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" containerName="mariadb-account-create-update" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477721 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" containerName="mariadb-database-create" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.477730 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" containerName="dnsmasq-dns" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.479741 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.484179 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.485652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.559154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.575942 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w" (OuterVolumeSpecName: "kube-api-access-29d9w") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "kube-api-access-29d9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.576982 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.577462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.578665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.578728 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.610839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.629244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.643722 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.643903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.645679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648840 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648854 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29d9w\" (UniqueName: \"kubernetes.io/projected/11052d78-74b6-472a-aaba-513368f51ce3-kube-api-access-29d9w\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648865 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648879 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.648890 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.664109 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.684256 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.701152 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.713836 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.747658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 
crc kubenswrapper[4751]: I0130 21:39:23.750697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.750728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.751380 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.752299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.752859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.771625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.772381 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.772384 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.784198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.835978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"ceilometer-0\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.858005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config" (OuterVolumeSpecName: "config") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.891832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.903516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config" (OuterVolumeSpecName: "config") pod "5a85ff98-c5c9-4735-ad9d-3c987976bd2f" (UID: "5a85ff98-c5c9-4735-ad9d-3c987976bd2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.943985 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962642 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962674 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.962686 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a85ff98-c5c9-4735-ad9d-3c987976bd2f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.978739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "11052d78-74b6-472a-aaba-513368f51ce3" (UID: "11052d78-74b6-472a-aaba-513368f51ce3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:23 crc kubenswrapper[4751]: I0130 21:39:23.992158 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22749c7d-bedc-4a77-a9d7-81bf0f9c70ca" path="/var/lib/kubelet/pods/22749c7d-bedc-4a77-a9d7-81bf0f9c70ca/volumes" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.064661 4751 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11052d78-74b6-472a-aaba-513368f51ce3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.156969 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157012 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157055 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157888 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.157944 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" gracePeriod=600 Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.167143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerStarted","Data":"c0307f0807d895bc4c4c81ee028a2f34849a32fd2400b791f772ab65d779a108"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.173932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerStarted","Data":"1364dfb35f78bd1c1c6c4e97299ac2e166c205513eddfbb1858b9264a7b65646"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.208736 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerStarted","Data":"11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.242250 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" 
event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerStarted","Data":"f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566dccff6-ddvxf" event={"ID":"11052d78-74b6-472a-aaba-513368f51ce3","Type":"ContainerDied","Data":"bff81fd2907d366a655d26ebdf3a255c3bffa93ae91269d7fa674f369fb98f34"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254221 4751 scope.go:117] "RemoveContainer" containerID="74aefce86656a68e812b38f7658b2359076b62078ffd0f3974807d58363f94b0" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.254313 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566dccff6-ddvxf" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.256336 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerStarted","Data":"499d3637c3e03f2b7dc0a86e62ae72f328746856d2c5b4b97226255304ddbec8"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.297572 4751 scope.go:117] "RemoveContainer" containerID="2ba96d5744b69d3f9276be5b0e9715862e0020d80034d236fcecc5d9420b54cc" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.313956 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerStarted","Data":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.314103 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-cf77776d-s5nbq" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api" containerID="cri-o://674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" gracePeriod=60 Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.314318 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.325842 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-hb44m" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.330236 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.346352 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.349688 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.355734 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.356345 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.358618 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.383310 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.400712 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-566dccff6-ddvxf"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.401415 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.404301 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cf77776d-s5nbq" podStartSLOduration=4.491048518 podStartE2EDuration="18.404283226s" podCreationTimestamp="2026-01-30 21:39:06 +0000 UTC" firstStartedPulling="2026-01-30 21:39:07.964215272 +0000 UTC m=+1486.710037921" lastFinishedPulling="2026-01-30 21:39:21.87744998 +0000 UTC m=+1500.623272629" observedRunningTime="2026-01-30 21:39:24.339397748 +0000 UTC m=+1503.085220397" watchObservedRunningTime="2026-01-30 21:39:24.404283226 +0000 UTC m=+1503.150105875" Jan 30 21:39:24 crc kubenswrapper[4751]: E0130 21:39:24.432207 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.484767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485039 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.485117 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.540437 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.550375 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-hb44m"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.587811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.596584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.599297 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.604827 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.609029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"nova-cell0-conductor-db-sync-47sz5\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") " pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.699060 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.804667 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.815877 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:24 crc kubenswrapper[4751]: I0130 21:39:24.819779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.291524 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.362359 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.489948 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" exitCode=0 Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.490037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.490100 4751 scope.go:117] "RemoveContainer" containerID="589b659983c64eaeb9431668de4131b84f85d7d4aaf79c3e0b75a24b0812e09e" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.491005 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:39:25 crc kubenswrapper[4751]: E0130 21:39:25.491399 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.492058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerStarted","Data":"db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.492122 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:25 crc kubenswrapper[4751]: 
I0130 21:39:25.494058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerStarted","Data":"702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.494225 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.495747 4751 generic.go:334] "Generic (PLEG): container finished" podID="7782d459-57bc-442e-a471-6c5839d6de47" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" exitCode=0 Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.495844 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf77776d-s5nbq" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.496624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf77776d-s5nbq" event={"ID":"7782d459-57bc-442e-a471-6c5839d6de47","Type":"ContainerDied","Data":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.498516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerStarted","Data":"f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.499420 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.502138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerStarted","Data":"11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.502980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.510078 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"4884de2c448f76056d8317537ec7b098481217936fd9e2f04eb669de3c631faf"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527257 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.527564 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc 
kubenswrapper[4751]: I0130 21:39:25.527606 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") pod \"7782d459-57bc-442e-a471-6c5839d6de47\" (UID: \"7782d459-57bc-442e-a471-6c5839d6de47\") " Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.528683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerStarted","Data":"bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.529071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.536049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4" (OuterVolumeSpecName: "kube-api-access-vwhz4") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "kube-api-access-vwhz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.541546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerStarted","Data":"ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65"} Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.550008 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.552454 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.552514 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhz4\" (UniqueName: \"kubernetes.io/projected/7782d459-57bc-442e-a471-6c5839d6de47-kube-api-access-vwhz4\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.590931 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podStartSLOduration=10.590906117 podStartE2EDuration="10.590906117s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.56152023 +0000 UTC m=+1504.307342879" watchObservedRunningTime="2026-01-30 21:39:25.590906117 +0000 UTC m=+1504.336728766" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.640368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.660593 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.674085 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f4bd4b69-ntk8n" podStartSLOduration=8.674065084 podStartE2EDuration="8.674065084s" podCreationTimestamp="2026-01-30 21:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.628971226 +0000 UTC m=+1504.374793875" watchObservedRunningTime="2026-01-30 21:39:25.674065084 +0000 UTC m=+1504.419887733" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.706626 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data" (OuterVolumeSpecName: "config-data") pod "7782d459-57bc-442e-a471-6c5839d6de47" (UID: "7782d459-57bc-442e-a471-6c5839d6de47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.776980 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7782d459-57bc-442e-a471-6c5839d6de47-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.792814 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podStartSLOduration=10.792796914 podStartE2EDuration="10.792796914s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.680579789 +0000 UTC m=+1504.426402438" watchObservedRunningTime="2026-01-30 21:39:25.792796914 +0000 UTC m=+1504.538619563" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.818361 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podStartSLOduration=8.818309348 podStartE2EDuration="8.818309348s" podCreationTimestamp="2026-01-30 21:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.757656363 +0000 UTC m=+1504.503479012" watchObservedRunningTime="2026-01-30 21:39:25.818309348 +0000 UTC m=+1504.564131997" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.903261 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podStartSLOduration=10.903240002 podStartE2EDuration="10.903240002s" podCreationTimestamp="2026-01-30 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:25.817772672 +0000 UTC m=+1504.563595321" watchObservedRunningTime="2026-01-30 21:39:25.903240002 +0000 UTC m=+1504.649062651" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.941381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.970630 
4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-cf77776d-s5nbq"] Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.994106 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11052d78-74b6-472a-aaba-513368f51ce3" path="/var/lib/kubelet/pods/11052d78-74b6-472a-aaba-513368f51ce3/volumes" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.994893 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85ff98-c5c9-4735-ad9d-3c987976bd2f" path="/var/lib/kubelet/pods/5a85ff98-c5c9-4735-ad9d-3c987976bd2f/volumes" Jan 30 21:39:25 crc kubenswrapper[4751]: I0130 21:39:25.995756 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7782d459-57bc-442e-a471-6c5839d6de47" path="/var/lib/kubelet/pods/7782d459-57bc-442e-a471-6c5839d6de47/volumes" Jan 30 21:39:26 crc kubenswrapper[4751]: E0130 21:39:26.174237 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f68fda0_5c9c_46c2_82b3_633695c6e6f4.slice/crio-db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7782d459_57bc_442e_a471_6c5839d6de47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7782d459_57bc_442e_a471_6c5839d6de47.slice/crio-0cdf7ad51a70ca5bfb45057c9c67833605be258614b207239214e50e206ee1c5\": RecentStats: unable to find data in memory cache]" Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.551490 4751 generic.go:334] "Generic (PLEG): container finished" podID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" exitCode=1 Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.551783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928"} Jan 30 21:39:26 crc kubenswrapper[4751]: I0130 21:39:26.552513 4751 scope.go:117] "RemoveContainer" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.206832 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.403233 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" probeResult="failure" output=< Jan 30 21:39:27 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:39:27 crc kubenswrapper[4751]: > Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.566773 4751 generic.go:334] "Generic (PLEG): container finished" podID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" exitCode=1 Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.566914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" 
event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee"} Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.567975 4751 scope.go:117] "RemoveContainer" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.816862 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:27 crc kubenswrapper[4751]: I0130 21:39:27.817582 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58dc6df599-nmmxw" podUID="b9f02a32-18ed-4030-94d6-16f4d0feff52" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.247175 4751 scope.go:117] "RemoveContainer" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.274990 4751 scope.go:117] "RemoveContainer" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: E0130 21:39:29.275721 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": container with ID starting with 674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa not found: ID does not exist" containerID="674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.275785 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa"} err="failed to get container status \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": rpc error: code = NotFound desc = could not find container \"674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa\": container with ID starting with 674993f7c67c83b7a009edf32ccbcc46246beac25e1720e6fddf8537fcf7e8aa not found: ID does not exist" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.819002 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:29 crc kubenswrapper[4751]: I0130 21:39:29.827864 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58dc6df599-nmmxw" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.535004 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.579099 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636097 4751 generic.go:334] "Generic (PLEG): container finished" podID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" exitCode=1 Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" 
event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.636232 4751 scope.go:117] "RemoveContainer" containerID="db89466ab47a413bfa604263bee5eb8c78e018c63643ceaff0f27f62ee26e928" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.637053 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" Jan 30 21:39:30 crc kubenswrapper[4751]: E0130 21:39:30.637578 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-84f9b8dd8f-qtmlz_openstack(8f68fda0-5c9c-46c2-82b3-633695c6e6f4)\"" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.656190 4751 generic.go:334] "Generic (PLEG): container finished" podID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" exitCode=1 Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.656271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.657024 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" Jan 30 21:39:30 crc kubenswrapper[4751]: E0130 21:39:30.657354 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6ffb596769-rgv47_openstack(826635d2-0549-4d63-84e2-3ba3cdf85db4)\"" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.661443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8"} Jan 30 21:39:30 crc kubenswrapper[4751]: I0130 21:39:30.733731 4751 scope.go:117] "RemoveContainer" containerID="bcfa1a5e063be4d843f69be862bdf4198ba058b09d410c418f13e640ec3964ee" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.677415 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb" Jan 30 21:39:31 crc kubenswrapper[4751]: E0130 21:39:31.677971 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-84f9b8dd8f-qtmlz_openstack(8f68fda0-5c9c-46c2-82b3-633695c6e6f4)\"" pod="openstack/heat-api-84f9b8dd8f-qtmlz" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.681285 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382" Jan 30 21:39:31 crc kubenswrapper[4751]: E0130 21:39:31.681547 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=heat-cfnapi pod=heat-cfnapi-6ffb596769-rgv47_openstack(826635d2-0549-4d63-84e2-3ba3cdf85db4)\"" pod="openstack/heat-cfnapi-6ffb596769-rgv47" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.685931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c"} Jan 30 21:39:31 crc kubenswrapper[4751]: I0130 21:39:31.933839 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:39:32 crc kubenswrapper[4751]: I0130 21:39:32.708830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2"} Jan 30 21:39:33 crc kubenswrapper[4751]: I0130 21:39:33.746731 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:39:34 crc kubenswrapper[4751]: I0130 21:39:34.944490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.017159 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.293665 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.367027 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.534395 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.571155 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.579728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.635836 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:35 crc kubenswrapper[4751]: I0130 21:39:35.636052 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6c448464db-8pmrl" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" containerID="cri-o://4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" gracePeriod=60 Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.527969 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.679394 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.712526 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.716976 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.718965 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:39:36 crc kubenswrapper[4751]: E0130 21:39:36.719036 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6c448464db-8pmrl" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine" Jan 30 21:39:36 crc kubenswrapper[4751]: I0130 21:39:36.777155 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:37 crc kubenswrapper[4751]: I0130 21:39:37.815676 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-586n4" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server" containerID="cri-o://21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" gracePeriod=2 Jan 30 21:39:38 crc kubenswrapper[4751]: I0130 21:39:38.828653 4751 generic.go:334] "Generic (PLEG): container finished" podID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerID="21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" exitCode=0 Jan 30 21:39:38 crc kubenswrapper[4751]: I0130 21:39:38.828732 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f"} Jan 30 21:39:39 crc kubenswrapper[4751]: I0130 21:39:39.976713 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:39:39 crc kubenswrapper[4751]: E0130 21:39:39.977047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.890116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6ffb596769-rgv47" event={"ID":"826635d2-0549-4d63-84e2-3ba3cdf85db4","Type":"ContainerDied","Data":"f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.890782 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f7aa37517cca46d92e89c8de2e90e7c2581e4c73761a48cd01499fa3a6ce3c18" Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.895156 4751 generic.go:334] "Generic (PLEG): container finished" podID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" exitCode=0 Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.895247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerDied","Data":"4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.898570 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-84f9b8dd8f-qtmlz" event={"ID":"8f68fda0-5c9c-46c2-82b3-633695c6e6f4","Type":"ContainerDied","Data":"11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e"} Jan 30 21:39:41 crc kubenswrapper[4751]: I0130 21:39:41.898611 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e830bec54790dfa3e122cbf706b934f604f7df0b29532d7a71f254e054bb5e" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.250896 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.332024 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408771 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408840 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.408919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjlk\" (UniqueName: 
\"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") pod \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\" (UID: \"8f68fda0-5c9c-46c2-82b3-633695c6e6f4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.409239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") pod \"826635d2-0549-4d63-84e2-3ba3cdf85db4\" (UID: \"826635d2-0549-4d63-84e2-3ba3cdf85db4\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.422474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.442357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d" (OuterVolumeSpecName: "kube-api-access-zq89d") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "kube-api-access-zq89d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.442984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.447020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk" (OuterVolumeSpecName: "kube-api-access-vzjlk") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "kube-api-access-vzjlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514257 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjlk\" (UniqueName: \"kubernetes.io/projected/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-kube-api-access-vzjlk\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq89d\" (UniqueName: \"kubernetes.io/projected/826635d2-0549-4d63-84e2-3ba3cdf85db4-kube-api-access-zq89d\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514378 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.514391 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.579356 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.590717 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.628477 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.648023 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.707755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data" (OuterVolumeSpecName: "config-data") pod "826635d2-0549-4d63-84e2-3ba3cdf85db4" (UID: "826635d2-0549-4d63-84e2-3ba3cdf85db4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.718923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719269 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") pod \"7fea9c34-deff-4930-87b5-c697eb7831d8\" (UID: \"7fea9c34-deff-4930-87b5-c697eb7831d8\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719415 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.719491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") pod \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\" (UID: \"f42767ff-b1d3-49e9-8b8d-39c65ea98978\") " Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720167 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720194 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.720206 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826635d2-0549-4d63-84e2-3ba3cdf85db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.721316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities" (OuterVolumeSpecName: "utilities") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.723469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data" (OuterVolumeSpecName: "config-data") pod "8f68fda0-5c9c-46c2-82b3-633695c6e6f4" (UID: "8f68fda0-5c9c-46c2-82b3-633695c6e6f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.724415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g" (OuterVolumeSpecName: "kube-api-access-psx9g") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "kube-api-access-psx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.726837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.729662 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4" (OuterVolumeSpecName: "kube-api-access-wjpj4") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "kube-api-access-wjpj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.769110 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.795517 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data" (OuterVolumeSpecName: "config-data") pod "7fea9c34-deff-4930-87b5-c697eb7831d8" (UID: "7fea9c34-deff-4930-87b5-c697eb7831d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.800504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f42767ff-b1d3-49e9-8b8d-39c65ea98978" (UID: "f42767ff-b1d3-49e9-8b8d-39c65ea98978"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822346 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjpj4\" (UniqueName: \"kubernetes.io/projected/f42767ff-b1d3-49e9-8b8d-39c65ea98978-kube-api-access-wjpj4\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822378 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822389 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psx9g\" (UniqueName: \"kubernetes.io/projected/7fea9c34-deff-4930-87b5-c697eb7831d8-kube-api-access-psx9g\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822397 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822405 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f42767ff-b1d3-49e9-8b8d-39c65ea98978-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822414 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822424 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f68fda0-5c9c-46c2-82b3-633695c6e6f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.822434 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fea9c34-deff-4930-87b5-c697eb7831d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.917958 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"af93872a-62a1-407c-9932-2afb4313f457","Type":"ContainerStarted","Data":"4cffc45d3bce332d82d2f158e979f8c8bd0f529ff62b75a3b9cd0b8d62526da0"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.923437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerStarted","Data":"1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924109 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924037 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" containerID="cri-o://1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924053 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" 
containerID="cri-o://6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.924063 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" containerID="cri-o://9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.923846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" containerID="cri-o://cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" gracePeriod=30 Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.928992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c448464db-8pmrl" event={"ID":"7fea9c34-deff-4930-87b5-c697eb7831d8","Type":"ContainerDied","Data":"c52a6ec70b4ea98c373beb2d768ebee9efb71cdd7d6badafe947f067a081150e"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.929423 4751 scope.go:117] "RemoveContainer" containerID="4b306988a2380cbe1d94c21ad11cef6733288cda2590ed761b989755ac079478" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.929393 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c448464db-8pmrl" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.949487 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.304243092 podStartE2EDuration="42.949473308s" podCreationTimestamp="2026-01-30 21:39:00 +0000 UTC" firstStartedPulling="2026-01-30 21:39:01.253644004 +0000 UTC m=+1479.999466653" lastFinishedPulling="2026-01-30 21:39:41.89887422 +0000 UTC m=+1520.644696869" observedRunningTime="2026-01-30 21:39:42.938372411 +0000 UTC m=+1521.684195060" watchObservedRunningTime="2026-01-30 21:39:42.949473308 +0000 UTC m=+1521.695295957" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.961668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-586n4" event={"ID":"f42767ff-b1d3-49e9-8b8d-39c65ea98978","Type":"ContainerDied","Data":"e5a114a7a3f0e24e3bd57896ba3f86ef24a269e372ca019ce144df066c9e2be1"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.961760 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-586n4" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.969420 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-84f9b8dd8f-qtmlz" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.971679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerStarted","Data":"33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c"} Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.973078 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6ffb596769-rgv47" Jan 30 21:39:42 crc kubenswrapper[4751]: I0130 21:39:42.985598 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.719876811 podStartE2EDuration="19.985581896s" podCreationTimestamp="2026-01-30 21:39:23 +0000 UTC" firstStartedPulling="2026-01-30 21:39:24.713649991 +0000 UTC m=+1503.459472640" lastFinishedPulling="2026-01-30 21:39:41.979355076 +0000 UTC m=+1520.725177725" observedRunningTime="2026-01-30 21:39:42.958469719 +0000 UTC m=+1521.704292368" watchObservedRunningTime="2026-01-30 21:39:42.985581896 +0000 UTC m=+1521.731404535" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.006153 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-47sz5" podStartSLOduration=2.318406842 podStartE2EDuration="19.006133066s" podCreationTimestamp="2026-01-30 21:39:24 +0000 UTC" firstStartedPulling="2026-01-30 21:39:25.299350908 +0000 UTC m=+1504.045173557" lastFinishedPulling="2026-01-30 21:39:41.987077132 +0000 UTC m=+1520.732899781" observedRunningTime="2026-01-30 21:39:42.994318629 +0000 UTC m=+1521.740141288" watchObservedRunningTime="2026-01-30 21:39:43.006133066 +0000 UTC m=+1521.751955715" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.056855 4751 scope.go:117] "RemoveContainer" containerID="21ac36001cb714817d8ab855578743e4c5c5ddbfabc891012d0c87386994da9f" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.084877 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.115706 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6c448464db-8pmrl"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.145162 4751 scope.go:117] "RemoveContainer" containerID="25b90bc3624912ca065d76095b1602d95f2fe189d80a97579243f563f8a8fa45" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.163273 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.219919 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-586n4"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.228454 4751 scope.go:117] "RemoveContainer" containerID="e023642d7e8f9f5527a83bfc616f033c2d4851bd320c9d6b4ef572caee21ef7c" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.233876 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.244389 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6ffb596769-rgv47"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.255318 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.315611 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-84f9b8dd8f-qtmlz"] Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.981949 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" exitCode=2 Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.981984 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" exitCode=0 Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.986502 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" path="/var/lib/kubelet/pods/7fea9c34-deff-4930-87b5-c697eb7831d8/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.987064 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" path="/var/lib/kubelet/pods/826635d2-0549-4d63-84e2-3ba3cdf85db4/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.987626 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" path="/var/lib/kubelet/pods/8f68fda0-5c9c-46c2-82b3-633695c6e6f4/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.991428 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" path="/var/lib/kubelet/pods/f42767ff-b1d3-49e9-8b8d-39c65ea98978/volumes" Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.992173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2"} Jan 30 21:39:43 crc kubenswrapper[4751]: I0130 21:39:43.992205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8"} Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.008007 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" exitCode=0 Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.008055 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c"} Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.119041 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.119936 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log" containerID="cri-o://0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954" gracePeriod=30 Jan 30 21:39:45 crc kubenswrapper[4751]: I0130 21:39:45.120401 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd" containerID="cri-o://404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f" gracePeriod=30 Jan 30 21:39:46 crc kubenswrapper[4751]: I0130 21:39:46.022210 4751 generic.go:334] "Generic (PLEG): container finished" podID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerID="0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954" exitCode=143 Jan 30 21:39:46 crc kubenswrapper[4751]: I0130 21:39:46.022301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954"} Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.098628 4751 generic.go:334] "Generic (PLEG): container finished" podID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerID="404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f" exitCode=0 Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.099275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f"} Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.190957 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328827 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328904 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.328996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") pod \"33588f5e-9224-4dd6-b689-0651c16d06bd\" (UID: \"33588f5e-9224-4dd6-b689-0651c16d06bd\") " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.330052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.330071 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs" (OuterVolumeSpecName: "logs") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.336713 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts" (OuterVolumeSpecName: "scripts") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.370291 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg" (OuterVolumeSpecName: "kube-api-access-mfdlg") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "kube-api-access-mfdlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.408899 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (OuterVolumeSpecName: "glance") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "pvc-03216ddc-ff0c-4c63-8e03-12380926233a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.417700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data" (OuterVolumeSpecName: "config-data") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432554 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432590 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432601 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33588f5e-9224-4dd6-b689-0651c16d06bd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432630 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" " Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432640 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfdlg\" (UniqueName: \"kubernetes.io/projected/33588f5e-9224-4dd6-b689-0651c16d06bd-kube-api-access-mfdlg\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.432649 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.445990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.452616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33588f5e-9224-4dd6-b689-0651c16d06bd" (UID: "33588f5e-9224-4dd6-b689-0651c16d06bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.480597 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.480789 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-03216ddc-ff0c-4c63-8e03-12380926233a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a") on node "crc"
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535263 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535314 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.535348 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33588f5e-9224-4dd6-b689-0651c16d06bd-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.541281 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.541728 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd" containerID="cri-o://38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d" gracePeriod=30
Jan 30 21:39:49 crc kubenswrapper[4751]: I0130 21:39:49.543133 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log" containerID="cri-o://63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031" gracePeriod=30
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119661 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33588f5e-9224-4dd6-b689-0651c16d06bd","Type":"ContainerDied","Data":"ddb9f9108d0450b2b505f7e37bbbf5c491b44e23c17e0903abd3c8bd376265b3"}
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119977 4751 scope.go:117] "RemoveContainer" containerID="404ff17c9262956b5de69cff0c330fcf3cee139543dbe993153748a7e4076c5f"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.119735 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.138736 4751 generic.go:334] "Generic (PLEG): container finished" podID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerID="63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031" exitCode=143
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.138798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031"}
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.147248 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.154617 4751 scope.go:117] "RemoveContainer" containerID="0e6fc40159796236c1d006a538830d10bc94cb3396f193843abf8cb478b98954"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.158073 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.192259 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.195981 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196002 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196025 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-utilities"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196031 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-utilities"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196043 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196050 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196059 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196073 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196086 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196105 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196120 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-content"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196127 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="extract-content"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196144 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196152 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196180 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196186 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196196 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196202 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196412 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196421 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7782d459-57bc-442e-a471-6c5839d6de47" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196429 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-httpd"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196445 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" containerName="glance-log"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196460 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42767ff-b1d3-49e9-8b8d-39c65ea98978" containerName="registry-server"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196472 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fea9c34-deff-4930-87b5-c697eb7831d8" containerName="heat-engine"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196483 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196489 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: E0130 21:39:50.196731 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196738 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f68fda0-5c9c-46c2-82b3-633695c6e6f4" containerName="heat-api"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.196946 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="826635d2-0549-4d63-84e2-3ba3cdf85db4" containerName="heat-cfnapi"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.197789 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.207021 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.207266 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.210002 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357171 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bqj\" (UniqueName: \"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.357964 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.358102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460595 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bqj\" (UniqueName: \"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.460733 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.461299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-logs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.461499 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cef73daf-a49c-4b32-8ebc-fe0adf90df58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.473034 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.473081 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd5683b2fac8da06378b2d5eb72c7d0b6faa54e75d4b318b8013499a38483353/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-scripts\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474340 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.474506 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-config-data\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.477209 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef73daf-a49c-4b32-8ebc-fe0adf90df58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.489816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bqj\" (UniqueName: \"kubernetes.io/projected/cef73daf-a49c-4b32-8ebc-fe0adf90df58-kube-api-access-44bqj\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.614948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-03216ddc-ff0c-4c63-8e03-12380926233a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-03216ddc-ff0c-4c63-8e03-12380926233a\") pod \"glance-default-external-api-0\" (UID: \"cef73daf-a49c-4b32-8ebc-fe0adf90df58\") " pod="openstack/glance-default-external-api-0"
Jan 30 21:39:50 crc kubenswrapper[4751]: I0130 21:39:50.818608 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 21:39:51 crc kubenswrapper[4751]: I0130 21:39:51.455003 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 21:39:51 crc kubenswrapper[4751]: I0130 21:39:51.992955 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33588f5e-9224-4dd6-b689-0651c16d06bd" path="/var/lib/kubelet/pods/33588f5e-9224-4dd6-b689-0651c16d06bd/volumes"
Jan 30 21:39:52 crc kubenswrapper[4751]: I0130 21:39:52.170007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"21fda0ee4b452e4895e60b33082ee76ed41388393b36d1134ac1dfeb12851f9c"}
Jan 30 21:39:52 crc kubenswrapper[4751]: I0130 21:39:52.170049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"2cf83b9bee222cd565d9a4680e34f3affd4a5a161e9b021b056cbef0ae4e5192"}
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.183659 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cef73daf-a49c-4b32-8ebc-fe0adf90df58","Type":"ContainerStarted","Data":"e118abf9a7405284a471846ef041080bfb9c2acc14afcc380561ca808dce2a05"}
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.187670 4751 generic.go:334] "Generic (PLEG): container finished" podID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerID="38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d" exitCode=0
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.187712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d"}
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.212382 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.212365628 podStartE2EDuration="3.212365628s" podCreationTimestamp="2026-01-30 21:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:53.211504735 +0000 UTC m=+1531.957327384" watchObservedRunningTime="2026-01-30 21:39:53.212365628 +0000 UTC m=+1531.958188277"
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.320282 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439516 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439577 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.439597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.440191 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs" (OuterVolumeSpecName: "logs") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.440414 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.441881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"58e79616-9b52-47f9-a43e-01cbd487fbbd\" (UID: \"58e79616-9b52-47f9-a43e-01cbd487fbbd\") "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.442733 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-logs\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.442752 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/58e79616-9b52-47f9-a43e-01cbd487fbbd-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.445156 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts" (OuterVolumeSpecName: "scripts") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.447921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk" (OuterVolumeSpecName: "kube-api-access-4xqhk") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "kube-api-access-4xqhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.471251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (OuterVolumeSpecName: "glance") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "pvc-2b6fe968-3470-4548-ade6-9a3644e74227". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.477117 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.500177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data" (OuterVolumeSpecName: "config-data") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.504897 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58e79616-9b52-47f9-a43e-01cbd487fbbd" (UID: "58e79616-9b52-47f9-a43e-01cbd487fbbd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544529 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544748 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" "
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544863 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.544957 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xqhk\" (UniqueName: \"kubernetes.io/projected/58e79616-9b52-47f9-a43e-01cbd487fbbd-kube-api-access-4xqhk\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.545037 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.545120 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e79616-9b52-47f9-a43e-01cbd487fbbd-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.576821 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.576982 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b6fe968-3470-4548-ade6-9a3644e74227" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227") on node "crc"
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.646815 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:53 crc kubenswrapper[4751]: I0130 21:39:53.958718 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"58e79616-9b52-47f9-a43e-01cbd487fbbd","Type":"ContainerDied","Data":"c29e1210fa3b7bf7fada16b3ee12edca743fdf4523588bf99fc7a7b6aa8b0f6d"}
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201444 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.201719 4751 scope.go:117] "RemoveContainer" containerID="38d6ea6d17555bee86d24d0120b47bfe85898a3b92da2d1783b64c466b54936d"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.234731 4751 scope.go:117] "RemoveContainer" containerID="63bb4deba3a7aa55abb8828c7f8386975555baf8db7e5316f55f82adf4041031"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.244752 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.258140 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.276789 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.277279 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277297 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd"
Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.277313 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277362 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277632 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-httpd"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.277660 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" containerName="glance-log"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.278837 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.281891 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.282170 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.324839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.362978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363056 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363166 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363339 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.363405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465372 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465396 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465506 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465553 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.465598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf400d-5171-4388-bfbc-18d62a106a12-logs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.466925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.468176 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.468210 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1439638cb8026f3fbd74a1d30ab35170ee3b35899e999b31e76311ef8605b4f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.471746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.471936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.472539 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.486473 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcf400d-5171-4388-bfbc-18d62a106a12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.492163 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjl2v\" (UniqueName: \"kubernetes.io/projected/4dcf400d-5171-4388-bfbc-18d62a106a12-kube-api-access-wjl2v\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.518224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b6fe968-3470-4548-ade6-9a3644e74227\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b6fe968-3470-4548-ade6-9a3644e74227\") pod \"glance-default-internal-api-0\" (UID: \"4dcf400d-5171-4388-bfbc-18d62a106a12\") " pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.598673 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 21:39:54 crc kubenswrapper[4751]: I0130 21:39:54.976586 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:39:54 crc kubenswrapper[4751]: E0130 21:39:54.977629 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:39:55 crc kubenswrapper[4751]: I0130 21:39:55.262969 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 21:39:55 crc kubenswrapper[4751]: I0130 21:39:55.992425 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e79616-9b52-47f9-a43e-01cbd487fbbd" path="/var/lib/kubelet/pods/58e79616-9b52-47f9-a43e-01cbd487fbbd/volumes"
Jan 30 21:39:56 crc kubenswrapper[4751]: I0130 21:39:56.246263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"a19bf02ecb70e0fe883ae7aae03f7533c26003e4e1c6c0dcf98df324824484d2"}
Jan 30 21:39:56 crc kubenswrapper[4751]: I0130 21:39:56.246561 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"507de3209d8750a346e93ee56fa1c608fdc16a418050cdd1c7a897a63d663a5a"}
Jan 30 21:39:56 crc kubenswrapper[4751]: E0130 21:39:56.853784 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551aecfb_7969_4644_ac50_b8f4c63002d3.slice/crio-conmon-33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551aecfb_7969_4644_ac50_b8f4c63002d3.slice/crio-33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.272090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4dcf400d-5171-4388-bfbc-18d62a106a12","Type":"ContainerStarted","Data":"7619d54769c5b3e3db3335e6add7163abcd5624f641ef8fef9987ea8738ba811"}
Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.276291 4751 generic.go:334] "Generic (PLEG): container finished" podID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerID="33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c" exitCode=0
Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.276357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerDied","Data":"33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c"}
Jan 30 21:39:57 crc kubenswrapper[4751]: I0130 21:39:57.300076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.300055999 podStartE2EDuration="3.300055999s" podCreationTimestamp="2026-01-30 21:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:39:57.291981143 +0000 UTC m=+1536.037803822" watchObservedRunningTime="2026-01-30 21:39:57.300055999 +0000 UTC m=+1536.045878648"
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.703893 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5"
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764540 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") "
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764720 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") "
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") "
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.764929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") pod \"551aecfb-7969-4644-ac50-b8f4c63002d3\" (UID: \"551aecfb-7969-4644-ac50-b8f4c63002d3\") "
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.770767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts" (OuterVolumeSpecName: "scripts") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.772512 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7" (OuterVolumeSpecName: "kube-api-access-fzpw7") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "kube-api-access-fzpw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.799305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data" (OuterVolumeSpecName: "config-data") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.806888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551aecfb-7969-4644-ac50-b8f4c63002d3" (UID: "551aecfb-7969-4644-ac50-b8f4c63002d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867335 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867391 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867403 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzpw7\" (UniqueName: \"kubernetes.io/projected/551aecfb-7969-4644-ac50-b8f4c63002d3-kube-api-access-fzpw7\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:58 crc kubenswrapper[4751]: I0130 21:39:58.867413 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551aecfb-7969-4644-ac50-b8f4c63002d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-47sz5" event={"ID":"551aecfb-7969-4644-ac50-b8f4c63002d3","Type":"ContainerDied","Data":"ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65"}
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300763 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee0d949c9abfb18e45a0aba7521f0154b8bf9089739c141933f8355d900aff65"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.300776 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-47sz5"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412230 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:39:59 crc kubenswrapper[4751]: E0130 21:39:59.412702 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412718 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.412949 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" containerName="nova-cell0-conductor-db-sync"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.413714 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.415659 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.418712 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.425796 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.486958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.487234 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.487628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.589547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.589676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]:
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.593146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.594878 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.615596 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"nova-cell0-conductor-0\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:39:59 crc kubenswrapper[4751]: I0130 21:39:59.730279 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.268697 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.315898 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerStarted","Data":"798ef32e2296c6eacd1b15ce640930b8569775ee60956cf5d1bdd298f4ab3819"}
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.819699 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.821516 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.869530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:00 crc kubenswrapper[4751]: I0130 21:40:00.871250 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.328784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerStarted","Data":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"}
Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.329880 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.329915 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.329924 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:01 crc kubenswrapper[4751]: I0130 21:40:01.355216 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.355197847 podStartE2EDuration="2.355197847s" podCreationTimestamp="2026-01-30 21:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:01.35048725 +0000 UTC m=+1540.096309899" watchObservedRunningTime="2026-01-30 21:40:01.355197847 +0000 UTC m=+1540.101020496"
Jan 30 21:40:02 crc kubenswrapper[4751]: I0130 21:40:02.646983 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:03 crc kubenswrapper[4751]: I0130 21:40:03.369209 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:40:03 crc kubenswrapper[4751]: I0130 21:40:03.369563 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.002272 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.004489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.379781 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor" containerID="cri-o://598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" gracePeriod=30
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.599291 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.599363 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.644286 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:04 crc kubenswrapper[4751]: I0130 21:40:04.648546 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:05 crc kubenswrapper[4751]: I0130 21:40:05.392559 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:05 crc kubenswrapper[4751]: I0130 21:40:05.392606 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.293050 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400719 4751 generic.go:334] "Generic (PLEG): container finished" podID="b058a895-614b-4e97-840e-dbb229de8109" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc" exitCode=0
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400783 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400783 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerDied","Data":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"}
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.400975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b058a895-614b-4e97-840e-dbb229de8109","Type":"ContainerDied","Data":"798ef32e2296c6eacd1b15ce640930b8569775ee60956cf5d1bdd298f4ab3819"}
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.401022 4751 scope.go:117] "RemoveContainer" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.427393 4751 scope.go:117] "RemoveContainer" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"
Jan 30 21:40:06 crc kubenswrapper[4751]: E0130 21:40:06.427863 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": container with ID starting with 598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc not found: ID does not exist" containerID="598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.427954 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc"} err="failed to get container status \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": rpc error: code = NotFound desc = could not find container \"598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc\": container with ID starting with 598f9c4cee6ede7c7793d1d8c3f43cdbd49b11f5aee115fa89d9905b7f71c5dc not found: ID does not exist"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.456929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") "
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.457125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") "
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.457267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") pod \"b058a895-614b-4e97-840e-dbb229de8109\" (UID: \"b058a895-614b-4e97-840e-dbb229de8109\") "
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.462893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd" (OuterVolumeSpecName: "kube-api-access-g22rd") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). InnerVolumeSpecName "kube-api-access-g22rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.489270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.492737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data" (OuterVolumeSpecName: "config-data") pod "b058a895-614b-4e97-840e-dbb229de8109" (UID: "b058a895-614b-4e97-840e-dbb229de8109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560663 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g22rd\" (UniqueName: \"kubernetes.io/projected/b058a895-614b-4e97-840e-dbb229de8109-kube-api-access-g22rd\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560716 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.560735 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058a895-614b-4e97-840e-dbb229de8109-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.750960 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.768044 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.779928 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:06 crc kubenswrapper[4751]: E0130 21:40:06.780587 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.780607 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.781202 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b058a895-614b-4e97-840e-dbb229de8109" containerName="nova-cell0-conductor-conductor"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.781992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.791296 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.793728 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5j6gn"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.793808 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.867565 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.868170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.868245 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971142 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.971221 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.984053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:06 crc kubenswrapper[4751]: I0130 21:40:06.992931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb304d7-db8e-4943-b0bc-d30a4332df91-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.041005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnp5c\" (UniqueName: \"kubernetes.io/projected/9bb304d7-db8e-4943-b0bc-d30a4332df91-kube-api-access-cnp5c\") pod \"nova-cell0-conductor-0\" (UID: \"9bb304d7-db8e-4943-b0bc-d30a4332df91\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.110938 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.414728 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.415022 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.623451 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 21:40:07 crc kubenswrapper[4751]: W0130 21:40:07.624850 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb304d7_db8e_4943_b0bc_d30a4332df91.slice/crio-98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007 WatchSource:0}: Error finding container 98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007: Status 404 returned error can't find the container with id 98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.654213 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:07 crc kubenswrapper[4751]: I0130 21:40:07.662069 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.006479 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b058a895-614b-4e97-840e-dbb229de8109" path="/var/lib/kubelet/pods/b058a895-614b-4e97-840e-dbb229de8109/volumes"
Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426066 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bb304d7-db8e-4943-b0bc-d30a4332df91","Type":"ContainerStarted","Data":"325f8f8f8ab243f01a7441d40332d9370852ec622074345f9b9f4fa08443a408"}
Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426834 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.426852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9bb304d7-db8e-4943-b0bc-d30a4332df91","Type":"ContainerStarted","Data":"98651a0114ca2c5a245b87fbd706bda83f03044f38a509e1f179b0fe7ac20007"}
Jan 30 21:40:08 crc kubenswrapper[4751]: I0130 21:40:08.445076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.445054484 podStartE2EDuration="2.445054484s" podCreationTimestamp="2026-01-30 21:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:08.444601742 +0000 UTC m=+1547.190424391" watchObservedRunningTime="2026-01-30 21:40:08.445054484 +0000 UTC m=+1547.190877133"
Jan 30 21:40:09 crc kubenswrapper[4751]: I0130 21:40:09.980192 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:40:09 crc kubenswrapper[4751]: E0130 21:40:09.980831 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.160634 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.696786 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"]
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.698160 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.700039 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.700426 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.735023 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"]
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.813987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.814034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.884953 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.886426 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.891738 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.901808 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.913064 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vm8dd"]
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917285 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917439 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.917531 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.921399 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.933408 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.951634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.952923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:12 crc kubenswrapper[4751]: I0130 21:40:12.965521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"nova-cell0-cell-mapping-x464h\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.005846 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vm8dd"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.038818 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039938 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.039977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.040072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.040135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.129376 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.131901 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.142283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.143845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.149662 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.156152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.185586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"aodh-db-create-vm8dd\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.189509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.192439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.229170 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.249877 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.251388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.269542 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279533 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279709 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.279878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.284671 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.316973 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.327422 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.334720 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.343706 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.351657 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381705 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381889 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381911 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.381930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.382042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.385460 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.386934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.387986 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.389713 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.396621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.397011 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.402593 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.423172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"nova-api-0\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.465962 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492873 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.492939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493175 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493191 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.493209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.499571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.499995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.514398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.517527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.518099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.525821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"nova-metadata-0\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.528048 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.530427 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.535866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"aodh-0010-account-create-update-t2pkp\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") " pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.578648 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"]
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607575 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.607725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.625279 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerID="1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" exitCode=137
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.625584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a"}
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.640507 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.664396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.665718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"nova-scheduler-0\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.673200 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.718503 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728525 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.728638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.729394 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.784199 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.827636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832398 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.832687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.833726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.834344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName:
\"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.834934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.835487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.836413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.860712 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"dnsmasq-dns-568d7fd7cf-hpws7\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:13 crc kubenswrapper[4751]: I0130 21:40:13.905520 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.011479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.289014 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.388795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389301 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389530 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.389627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") pod \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\" (UID: \"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511\") " Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.390510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.390972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.396821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn" (OuterVolumeSpecName: "kube-api-access-dqkxn") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "kube-api-access-dqkxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.401467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts" (OuterVolumeSpecName: "scripts") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.457835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493917 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493950 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqkxn\" (UniqueName: \"kubernetes.io/projected/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-kube-api-access-dqkxn\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493962 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493970 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.493979 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.499454 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581359 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581848 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581862 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581877 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581884 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581900 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581906 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: E0130 21:40:14.581934 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.581940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582153 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-notification-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582164 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="ceilometer-central-agent" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582175 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="sg-core" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.582186 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" containerName="proxy-httpd" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.583134 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.589967 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.590228 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.597981 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.615614 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.649912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.655911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerStarted","Data":"17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561"} Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.655948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerStarted","Data":"e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d"} Jan 30 21:40:14 crc kubenswrapper[4751]: W0130 21:40:14.658786 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecc4c40f_f619_45e9_9e3d_baf3a3440ca2.slice/crio-3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0 WatchSource:0}: Error finding container 3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0: Status 404 returned error can't find the container with id 3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0 Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.669577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data" (OuterVolumeSpecName: "config-data") pod "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" (UID: "e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511","Type":"ContainerDied","Data":"4884de2c448f76056d8317537ec7b098481217936fd9e2f04eb669de3c631faf"} Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676373 4751 scope.go:117] "RemoveContainer" containerID="1fc41ef015f899a24b1533353b6987552c41d8844cf586e1769d6cc1f39a0c6a" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.676516 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.693174 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701890 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.701986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.702051 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.722432 4751 scope.go:117] "RemoveContainer" containerID="6d06091377b2a8fc82610799f2cf8764b0bd657f28f4bab21ad30e6eb045d8d2" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.725382 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.735860 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x464h" podStartSLOduration=2.735835339 podStartE2EDuration="2.735835339s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:14.686036145 +0000 UTC m=+1553.431858794" watchObservedRunningTime="2026-01-30 21:40:14.735835339 +0000 UTC m=+1553.481657978" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.769171 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.797473 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.805865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.805973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.806023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.806058 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.810903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.811163 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.811994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.816691 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.820213 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825298 4751 scope.go:117] "RemoveContainer" containerID="9c01cf6df6cfdacc48a6527bc7f77429e8ab33e1ed7d506f200e6257569ad93c" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"nova-cell1-conductor-db-sync-php6q\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.825832 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.826382 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.834064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.859522 4751 scope.go:117] "RemoveContainer" containerID="cfdea75b80256723e6e5c7537ac03523b96b0f4ab2bf0621af6d62950a93c5b8" Jan 30 21:40:14 crc kubenswrapper[4751]: I0130 21:40:14.928987 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.011169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015110 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015425 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68jd\" (UniqueName: 
\"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.015788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.060553 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.094425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.120577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127089 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127156 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127195 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.127260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.128433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.128578 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 
21:40:15.129113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.130118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.137135 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.140503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.141392 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.164125 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.167278 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"] Jan 30 21:40:15 crc kubenswrapper[4751]: I0130 21:40:15.180008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " pod="openstack/ceilometer-0" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.467143 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.635644 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.707814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerStarted","Data":"84126f69388906672210fb0c0f5b79f09ceedc6fc66204e43ac117768cbfb6e9"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.719135 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerStarted","Data":"86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.721906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"0603105be1749a5d28001ba428223b5f3edc9bed1bd5953cb98748d034ddf6d5"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.724896 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerStarted","Data":"6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.727945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerStarted","Data":"c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.727967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerStarted","Data":"d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731064 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerID="2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f" exitCode=0 Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.731139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerStarted","Data":"90582d1ee044bf5a553f8b95b8254b85197e43f8fadf0df2ec897950782d7dc8"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.733014 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0"} Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734429 4751 generic.go:334] "Generic (PLEG): container finished" podID="f243fc38-73c3-44ef-98b1-8c3086761087" containerID="6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf" exitCode=0 Jan 30 
21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerDied","Data":"6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf"}
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.734559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerStarted","Data":"e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654"}
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:15.778162 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0010-account-create-update-t2pkp" podStartSLOduration=2.778143485 podStartE2EDuration="2.778143485s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:15.743757883 +0000 UTC m=+1554.489580532" watchObservedRunningTime="2026-01-30 21:40:15.778143485 +0000 UTC m=+1554.523966134"
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.007742 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511" path="/var/lib/kubelet/pods/e0bb5e5f-7bde-4f5e-aa8f-ea56ede4c511/volumes"
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.745646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerStarted","Data":"15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d"}
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.748225 4751 generic.go:334] "Generic (PLEG): container finished" podID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerID="c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37" exitCode=0
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.748287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerDied","Data":"c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37"}
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.751830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerStarted","Data":"aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36"}
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.751974 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.772762 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-php6q" podStartSLOduration=2.772739833 podStartE2EDuration="2.772739833s" podCreationTimestamp="2026-01-30 21:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:16.763599138 +0000 UTC m=+1555.509421787" watchObservedRunningTime="2026-01-30 21:40:16.772739833 +0000 UTC m=+1555.518562482"
Jan 30 21:40:16 crc kubenswrapper[4751]: I0130 21:40:16.793111 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podStartSLOduration=3.793092078 podStartE2EDuration="3.793092078s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:16.785597547 +0000 UTC m=+1555.531420216" watchObservedRunningTime="2026-01-30 21:40:16.793092078 +0000 UTC m=+1555.538914727"
Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.096671 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.116074 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 21:40:17 crc kubenswrapper[4751]: I0130 21:40:17.285794 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:40:17 crc kubenswrapper[4751]: W0130 21:40:17.799869 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6b4242_4cff_47ed_a1a0_d13cb8cb3f08.slice/crio-45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a WatchSource:0}: Error finding container 45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a: Status 404 returned error can't find the container with id 45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a
Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.786706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a"}
Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.951645 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp"
Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.988690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") pod \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") "
Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.988735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") pod \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\" (UID: \"8eb55a4f-933c-4871-b2d4-aed75e1449d7\") "
Jan 30 21:40:18 crc kubenswrapper[4751]: I0130 21:40:18.990094 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8eb55a4f-933c-4871-b2d4-aed75e1449d7" (UID: "8eb55a4f-933c-4871-b2d4-aed75e1449d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.002522 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg" (OuterVolumeSpecName: "kube-api-access-zvbkg") pod "8eb55a4f-933c-4871-b2d4-aed75e1449d7" (UID: "8eb55a4f-933c-4871-b2d4-aed75e1449d7"). InnerVolumeSpecName "kube-api-access-zvbkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.098724 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8eb55a4f-933c-4871-b2d4-aed75e1449d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.098772 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvbkg\" (UniqueName: \"kubernetes.io/projected/8eb55a4f-933c-4871-b2d4-aed75e1449d7-kube-api-access-zvbkg\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.682946 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.712158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") pod \"f243fc38-73c3-44ef-98b1-8c3086761087\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.712440 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") pod \"f243fc38-73c3-44ef-98b1-8c3086761087\" (UID: \"f243fc38-73c3-44ef-98b1-8c3086761087\") " Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.713262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f243fc38-73c3-44ef-98b1-8c3086761087" (UID: "f243fc38-73c3-44ef-98b1-8c3086761087"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.717389 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f243fc38-73c3-44ef-98b1-8c3086761087-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.739718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8" (OuterVolumeSpecName: "kube-api-access-mrph8") pod "f243fc38-73c3-44ef-98b1-8c3086761087" (UID: "f243fc38-73c3-44ef-98b1-8c3086761087"). InnerVolumeSpecName "kube-api-access-mrph8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0010-account-create-update-t2pkp" event={"ID":"8eb55a4f-933c-4871-b2d4-aed75e1449d7","Type":"ContainerDied","Data":"d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4"} Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808157 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d5341ff0931f82eac8cd8b45647c0f80fe4a045d5a4f8a89839f0597b82ef4" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.808221 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0010-account-create-update-t2pkp" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.819456 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrph8\" (UniqueName: \"kubernetes.io/projected/f243fc38-73c3-44ef-98b1-8c3086761087-kube-api-access-mrph8\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vm8dd" event={"ID":"f243fc38-73c3-44ef-98b1-8c3086761087","Type":"ContainerDied","Data":"e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654"} Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833504 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9093f8aa53412c98d719622a4c9f60b070eeca780e5b7205a17f90369cae654" Jan 30 21:40:19 crc kubenswrapper[4751]: I0130 21:40:19.833567 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vm8dd" Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.849678 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerStarted","Data":"dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.856516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.861817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.861882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerStarted","Data":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.866906 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" containerID="cri-o://4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.866988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.867010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerStarted","Data":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.867051 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" 
containerID="cri-o://104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.875212 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.4533252770000002 podStartE2EDuration="7.875195028s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="2026-01-30 21:40:15.118537578 +0000 UTC m=+1553.864360227" lastFinishedPulling="2026-01-30 21:40:19.540407329 +0000 UTC m=+1558.286229978" observedRunningTime="2026-01-30 21:40:20.872846295 +0000 UTC m=+1559.618668944" watchObservedRunningTime="2026-01-30 21:40:20.875195028 +0000 UTC m=+1559.621017697" Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.896243 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerStarted","Data":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.897203 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" gracePeriod=30 Jan 30 21:40:20 crc kubenswrapper[4751]: I0130 21:40:20.949806 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.179790092 podStartE2EDuration="8.949765456s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="2026-01-30 21:40:14.687464413 +0000 UTC m=+1553.433287062" lastFinishedPulling="2026-01-30 21:40:19.457439777 +0000 UTC m=+1558.203262426" observedRunningTime="2026-01-30 21:40:20.895106561 +0000 UTC m=+1559.640929210" watchObservedRunningTime="2026-01-30 21:40:20.949765456 +0000 UTC m=+1559.695588105" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.010080 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.549916974 podStartE2EDuration="8.01005381s" podCreationTimestamp="2026-01-30 21:40:13 +0000 UTC" firstStartedPulling="2026-01-30 21:40:15.079599545 +0000 UTC m=+1553.825422194" lastFinishedPulling="2026-01-30 21:40:19.539736371 +0000 UTC m=+1558.285559030" observedRunningTime="2026-01-30 21:40:20.918049856 +0000 UTC m=+1559.663872525" watchObservedRunningTime="2026-01-30 21:40:21.01005381 +0000 UTC m=+1559.755876459" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.018623 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.252949331 podStartE2EDuration="9.018602849s" podCreationTimestamp="2026-01-30 21:40:12 +0000 UTC" firstStartedPulling="2026-01-30 21:40:14.68699659 +0000 UTC m=+1553.432819239" lastFinishedPulling="2026-01-30 21:40:19.452650108 +0000 UTC m=+1558.198472757" observedRunningTime="2026-01-30 21:40:20.933991113 +0000 UTC m=+1559.679813762" watchObservedRunningTime="2026-01-30 21:40:21.018602849 +0000 UTC m=+1559.764425498" Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.912063 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} Jan 30 
21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.912224 4751 generic.go:334] "Generic (PLEG): container finished" podID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" exitCode=143 Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.921722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7"} Jan 30 21:40:21 crc kubenswrapper[4751]: I0130 21:40:21.921762 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2"} Jan 30 21:40:22 crc kubenswrapper[4751]: I0130 21:40:22.936407 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerID="17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561" exitCode=0 Jan 30 21:40:22 crc kubenswrapper[4751]: I0130 21:40:22.936512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerDied","Data":"17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561"} Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.399410 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.400127 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400486 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.400510 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400516 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400767 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" containerName="mariadb-account-create-update" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.400788 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" containerName="mariadb-database-create" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.401592 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405439 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405687 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405802 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.405919 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.418532 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.518992 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.525916 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.525962 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.526050 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.526229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.627897 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.628102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.634144 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.634916 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.635422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.651815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"aodh-db-sync-dmqw2\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") " pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.677528 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.677576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.725853 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.785440 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.785526 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.807664 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.831008 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.831047 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.912251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936061 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936224 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.936404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") pod \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\" (UID: \"b8bf4d1e-d4c4-419c-b85b-5553a4996b75\") " Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.954244 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj" (OuterVolumeSpecName: "kube-api-access-m9njj") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "kube-api-access-m9njj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.961966 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:23 crc kubenswrapper[4751]: I0130 21:40:23.994504 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:40:23 crc kubenswrapper[4751]: E0130 21:40:23.995107 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.018496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.028767 4751 generic.go:334] "Generic (PLEG): container finished" podID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" exitCode=137 Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.028998 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerDied","Data":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035429 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b5fd5d955-5ksqz" event={"ID":"b8bf4d1e-d4c4-419c-b85b-5553a4996b75","Type":"ContainerDied","Data":"bd07fda2d7a8e027afec3c510cd01ca77e8506c7870cbb605108fca39849bb7a"} Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035481 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.035622 4751 scope.go:117] "RemoveContainer" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039354 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9njj\" (UniqueName: \"kubernetes.io/projected/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-kube-api-access-m9njj\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039612 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.039622 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.059110 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.059410 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" containerID="cri-o://eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" gracePeriod=10 Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.120102 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data" (OuterVolumeSpecName: "config-data") pod "b8bf4d1e-d4c4-419c-b85b-5553a4996b75" (UID: "b8bf4d1e-d4c4-419c-b85b-5553a4996b75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.141397 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bf4d1e-d4c4-419c-b85b-5553a4996b75-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.142519 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.207605 4751 scope.go:117] "RemoveContainer" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: E0130 21:40:24.208518 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": container with ID starting with 2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d not found: ID does not exist" containerID="2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.208568 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d"} err="failed to get container status \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": rpc error: code = NotFound desc = could not find container \"2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d\": container with ID starting with 2904e4316cbba4a0972ec09c8b2f6ced0ccda32823e6c191ae184b163790800d not found: ID does not exist" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.370673 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.390760 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b5fd5d955-5ksqz"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.463863 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dmqw2"] Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.718769 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.731037 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.759778 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.860416 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") pod \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\" (UID: \"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd\") " Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.870757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls" (OuterVolumeSpecName: "kube-api-access-gr8ls") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "kube-api-access-gr8ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.872657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts" (OuterVolumeSpecName: "scripts") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.900529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.911390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data" (OuterVolumeSpecName: "config-data") pod "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" (UID: "0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962757 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962786 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962795 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr8ls\" (UniqueName: \"kubernetes.io/projected/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-kube-api-access-gr8ls\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:24 crc kubenswrapper[4751]: I0130 21:40:24.962805 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.046529 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerID="eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" exitCode=0 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.046603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.052201 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerStarted","Data":"e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.085699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerStarted","Data":"21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.085920 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.125742 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.334642158 podStartE2EDuration="11.1257211s" podCreationTimestamp="2026-01-30 21:40:14 +0000 UTC" firstStartedPulling="2026-01-30 21:40:18.740416373 +0000 UTC m=+1557.486239022" lastFinishedPulling="2026-01-30 21:40:24.531495305 +0000 UTC m=+1563.277317964" observedRunningTime="2026-01-30 21:40:25.111258242 +0000 UTC m=+1563.857080891" watchObservedRunningTime="2026-01-30 21:40:25.1257211 +0000 UTC m=+1563.871543739" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167356 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x464h" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167636 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x464h" event={"ID":"0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd","Type":"ContainerDied","Data":"e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d"} Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.167756 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e386bc63e4c6fd0ed66a429bd207a6d77378c3bd08ff0d82d176f966ccb0e22d" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201416 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" containerID="cri-o://a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" gracePeriod=30 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.201539 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" containerID="cri-o://c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" gracePeriod=30 Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.349779 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.464458 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.512871 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.512932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513042 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513119 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.513147 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc 
kubenswrapper[4751]: I0130 21:40:25.513282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") pod \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\" (UID: \"5c918a5e-396e-4f0a-a68e-babcb03f2f4f\") " Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.523228 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27" (OuterVolumeSpecName: "kube-api-access-j6c27") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "kube-api-access-j6c27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.587202 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.601131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617414 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617462 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.617478 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6c27\" (UniqueName: \"kubernetes.io/projected/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-kube-api-access-j6c27\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.631765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.644552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config" (OuterVolumeSpecName: "config") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.645717 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c918a5e-396e-4f0a-a68e-babcb03f2f4f" (UID: "5c918a5e-396e-4f0a-a68e-babcb03f2f4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.721118 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.722575 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.722656 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c918a5e-396e-4f0a-a68e-babcb03f2f4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:25 crc kubenswrapper[4751]: I0130 21:40:25.994755 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" path="/var/lib/kubelet/pods/b8bf4d1e-d4c4-419c-b85b-5553a4996b75/volumes" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.201013 4751 generic.go:334] "Generic (PLEG): container finished" podID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" exitCode=143 Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.201081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.216387 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g645r" event={"ID":"5c918a5e-396e-4f0a-a68e-babcb03f2f4f","Type":"ContainerDied","Data":"170e124c674cb6797f80484f6f460a674ca21330ea1b870e78baeac2120834e0"} Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217188 4751 scope.go:117] "RemoveContainer" containerID="eb0f3e62504ef376cb231daedb59d05d53a0fc2c2b6b6606cc3a08b14b2931e8" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.217359 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" containerID="cri-o://dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" gracePeriod=30 Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.263380 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.264482 4751 scope.go:117] "RemoveContainer" containerID="19625e5a680f498754e1957e0d693d69d11c0c30e1b3f7eadc11af86a948548e" Jan 30 21:40:26 crc kubenswrapper[4751]: I0130 21:40:26.273046 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g645r"] Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.234277 4751 generic.go:334] "Generic (PLEG): container finished" podID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerID="15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d" exitCode=0 Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.234666 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerDied","Data":"15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d"} Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.522064 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac78e815_d61d_4cb5_ae78_e2ed6c7478e1.slice/crio-dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e.scope\": RecentStats: unable to find data in memory cache]" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.871520 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872446 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="init" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872473 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="init" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872509 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872518 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872557 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 
30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872565 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 30 21:40:27 crc kubenswrapper[4751]: E0130 21:40:27.872591 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872599 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872848 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" containerName="nova-manage" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872911 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bf4d1e-d4c4-419c-b85b-5553a4996b75" containerName="heat-cfnapi" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.872940 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" containerName="dnsmasq-dns" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.877193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.883227 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.889487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.987835 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c918a5e-396e-4f0a-a68e-babcb03f2f4f" path="/var/lib/kubelet/pods/5c918a5e-396e-4f0a-a68e-babcb03f2f4f/volumes" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.990912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:27 crc kubenswrapper[4751]: I0130 21:40:27.991845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.028833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"redhat-marketplace-28zk5\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.212410 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.249247 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" exitCode=0 Jan 30 21:40:28 crc kubenswrapper[4751]: I0130 21:40:28.249461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerDied","Data":"dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e"} Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.830589 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831187 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831611 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:40:28 crc kubenswrapper[4751]: E0130 21:40:28.831684 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.280491 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-php6q" event={"ID":"e675971e-ba0e-4630-bc1b-bdf47a433dd7","Type":"ContainerDied","Data":"86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b"} Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.280805 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f75d724ed17fcc60b6ecec83042af7088c44a17f4847501a5109d876e1517b" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.283140 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1","Type":"ContainerDied","Data":"6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853"} Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.283163 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6634a7ec1926462565ee356b01be6be019e25607c220ad7ffda5036ec6de3853" Jan 30 21:40:30 crc 
kubenswrapper[4751]: I0130 21:40:30.511453 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.521036 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.653964 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654021 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654157 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654213 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654278 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") pod \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\" (UID: \"ac78e815-d61d-4cb5-ae78-e2ed6c7478e1\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.654316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") pod \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\" (UID: \"e675971e-ba0e-4630-bc1b-bdf47a433dd7\") " Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.659571 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts" (OuterVolumeSpecName: "scripts") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.660059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f" (OuterVolumeSpecName: "kube-api-access-fvz4f") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "kube-api-access-fvz4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.660194 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm" (OuterVolumeSpecName: "kube-api-access-jz5pm") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "kube-api-access-jz5pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.713725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data" (OuterVolumeSpecName: "config-data") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.715027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" (UID: "ac78e815-d61d-4cb5-ae78-e2ed6c7478e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.721995 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data" (OuterVolumeSpecName: "config-data") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.742083 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e675971e-ba0e-4630-bc1b-bdf47a433dd7" (UID: "e675971e-ba0e-4630-bc1b-bdf47a433dd7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvz4f\" (UniqueName: \"kubernetes.io/projected/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-kube-api-access-fvz4f\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763934 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763945 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763953 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763962 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5pm\" (UniqueName: \"kubernetes.io/projected/e675971e-ba0e-4630-bc1b-bdf47a433dd7-kube-api-access-jz5pm\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763970 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.763979 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675971e-ba0e-4630-bc1b-bdf47a433dd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:30 crc kubenswrapper[4751]: I0130 21:40:30.797064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.227908 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.297076 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerStarted","Data":"0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553"} Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298711 4751 generic.go:334] "Generic (PLEG): container finished" podID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" exitCode=0 Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298763 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"} Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.298915 4751 scope.go:117] "RemoveContainer" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.299082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2","Type":"ContainerDied","Data":"3024c16ad9c3581d8359e24b5329d3dd5dbc599abb3804944a084a5026b4bea0"} Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.300318 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" exitCode=0 Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.300429 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-php6q" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9"} Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerStarted","Data":"7804348a0fdd5e612f2972159acdb604e779f2dc22bcf28432d660b17f23f501"} Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.301304 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.331972 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dmqw2" podStartSLOduration=2.545221765 podStartE2EDuration="8.331949511s" podCreationTimestamp="2026-01-30 21:40:23 +0000 UTC" firstStartedPulling="2026-01-30 21:40:24.544267677 +0000 UTC m=+1563.290090326" lastFinishedPulling="2026-01-30 21:40:30.330995423 +0000 UTC m=+1569.076818072" observedRunningTime="2026-01-30 21:40:31.310498566 +0000 UTC m=+1570.056321215" watchObservedRunningTime="2026-01-30 21:40:31.331949511 +0000 UTC m=+1570.077772160" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.369202 4751 scope.go:117] "RemoveContainer" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378476 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378544 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.378585 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") pod \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\" (UID: \"ecc4c40f-f619-45e9-9e3d-baf3a3440ca2\") " Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.379654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs" (OuterVolumeSpecName: "logs") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.380129 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.395944 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.398276 4751 scope.go:117] "RemoveContainer" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.403208 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": container with ID starting with c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d not found: ID does not exist" containerID="c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403254 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d"} err="failed to get container status \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": rpc error: code = NotFound desc = could not find container \"c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d\": container with ID starting with c26392c8774fa05fa8bed1df4d07f08e6743e53be396dfdb4a19f20047df588d not found: ID does not exist" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403282 4751 scope.go:117] "RemoveContainer" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.403533 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": container with ID starting with a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8 not found: ID does not exist" containerID="a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.403576 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8"} err="failed to get container status \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": rpc error: code = NotFound desc = could not find container \"a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8\": container with ID starting with a12f454f2fa43055f0701b691728e808dabdc8866c986b81cbe1626dd059cfd8 not found: ID does not exist" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.409118 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.412316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj" (OuterVolumeSpecName: "kube-api-access-dsttj") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "kube-api-access-dsttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.439898 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440458 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440481 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440494 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440502 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440536 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440545 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync" Jan 30 21:40:31 crc kubenswrapper[4751]: E0130 21:40:31.440590 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440599 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440846 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" containerName="nova-scheduler-scheduler" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440871 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-log" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440907 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" containerName="nova-cell1-conductor-db-sync" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.440919 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" containerName="nova-api-api" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.452922 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.453049 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.458475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data" (OuterVolumeSpecName: "config-data") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.461766 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.483526 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsttj\" (UniqueName: \"kubernetes.io/projected/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-kube-api-access-dsttj\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.483552 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.502530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" (UID: "ecc4c40f-f619-45e9-9e3d-baf3a3440ca2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585792 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.585913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.586054 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.684850 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.698830 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.699850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.700197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.700251 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.716080 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.725238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.752143 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.755273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.802865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: 
\"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.805185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.835462 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.848778 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.861614 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.866222 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.871041 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.876319 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.909721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.909836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.910455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.918286 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.921044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1153d5-e025-439d-9799-8bf38014a585-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.926291 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.927805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrzd\" (UniqueName: \"kubernetes.io/projected/6c1153d5-e025-439d-9799-8bf38014a585-kube-api-access-6xrzd\") pod \"nova-cell1-conductor-0\" (UID: \"6c1153d5-e025-439d-9799-8bf38014a585\") " pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.996308 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac78e815-d61d-4cb5-ae78-e2ed6c7478e1" path="/var/lib/kubelet/pods/ac78e815-d61d-4cb5-ae78-e2ed6c7478e1/volumes" Jan 30 21:40:31 crc kubenswrapper[4751]: I0130 21:40:31.996923 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc4c40f-f619-45e9-9e3d-baf3a3440ca2" path="/var/lib/kubelet/pods/ecc4c40f-f619-45e9-9e3d-baf3a3440ca2/volumes" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.014721 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.014949 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.015078 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.016307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.069799 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.121862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.122057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.123165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.128414 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.129418 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.140070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"nova-api-0\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.201206 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.392664 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.638231 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 21:40:32 crc kubenswrapper[4751]: I0130 21:40:32.764420 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:32 crc kubenswrapper[4751]: W0130 21:40:32.766498 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode039239f_9678_4ac5_bbd9_31120a7e569a.slice/crio-6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4 WatchSource:0}: Error finding container 6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4: Status 404 returned error can't find the container with id 6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4 Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.335931 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" exitCode=0 Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.336004 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.338509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerStarted","Data":"71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.338567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerStarted","Data":"48f5ec0c53f7e04ad3c659a4b9e04d6883529b029a3420700108431ca3b92a48"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.340936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerStarted","Data":"6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.342907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6c1153d5-e025-439d-9799-8bf38014a585","Type":"ContainerStarted","Data":"31909a59565029413bfc6272ccab934b2e947a948ea6c7ab91a11e78c7ca024a"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.342939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-0" event={"ID":"6c1153d5-e025-439d-9799-8bf38014a585","Type":"ContainerStarted","Data":"49ad8244bc99944ded6c99263f0addd92985672a9d9e6f6ae9d39addce8e392e"} Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.343588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.400533 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4005156530000002 podStartE2EDuration="2.400515653s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.392055546 +0000 UTC m=+1572.137878195" watchObservedRunningTime="2026-01-30 21:40:33.400515653 +0000 UTC m=+1572.146338302" Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.423786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.423764356 podStartE2EDuration="2.423764356s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.409835203 +0000 UTC m=+1572.155657852" watchObservedRunningTime="2026-01-30 21:40:33.423764356 +0000 UTC m=+1572.169587005" Jan 30 21:40:33 crc kubenswrapper[4751]: I0130 21:40:33.439414 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.439393714 podStartE2EDuration="2.439393714s" podCreationTimestamp="2026-01-30 21:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:33.429108108 +0000 UTC m=+1572.174930757" watchObservedRunningTime="2026-01-30 21:40:33.439393714 +0000 UTC m=+1572.185216363" Jan 30 21:40:34 crc kubenswrapper[4751]: I0130 21:40:34.357295 4751 generic.go:334] "Generic (PLEG): container finished" podID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerID="0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553" exitCode=0 Jan 30 21:40:34 crc kubenswrapper[4751]: I0130 21:40:34.357431 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerDied","Data":"0309515ef606ff55b4a18e80ad5013912740e27a1539e1146826375f54a2b553"} Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.369032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerStarted","Data":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"} Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.393105 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28zk5" podStartSLOduration=5.121456157 podStartE2EDuration="8.39308866s" podCreationTimestamp="2026-01-30 21:40:27 +0000 UTC" firstStartedPulling="2026-01-30 21:40:31.311475723 +0000 UTC m=+1570.057298372" lastFinishedPulling="2026-01-30 21:40:34.583108236 +0000 UTC m=+1573.328930875" observedRunningTime="2026-01-30 21:40:35.38637927 +0000 UTC m=+1574.132201909" watchObservedRunningTime="2026-01-30 21:40:35.39308866 +0000 UTC m=+1574.138911299" 
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.905297 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2"
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925578 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.925993 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.926773 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") pod \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\" (UID: \"da95a3dd-69cf-4a27-af6c-1ac5b262c00a\") "
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.933188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts" (OuterVolumeSpecName: "scripts") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.933248 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8" (OuterVolumeSpecName: "kube-api-access-ttrf8") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "kube-api-access-ttrf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.976907 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:40:35 crc kubenswrapper[4751]: E0130 21:40:35.977527 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.981494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:35 crc kubenswrapper[4751]: I0130 21:40:35.996516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data" (OuterVolumeSpecName: "config-data") pod "da95a3dd-69cf-4a27-af6c-1ac5b262c00a" (UID: "da95a3dd-69cf-4a27-af6c-1ac5b262c00a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030131 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030173 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030184 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttrf8\" (UniqueName: \"kubernetes.io/projected/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-kube-api-access-ttrf8\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.030194 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da95a3dd-69cf-4a27-af6c-1ac5b262c00a-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dmqw2" event={"ID":"da95a3dd-69cf-4a27-af6c-1ac5b262c00a","Type":"ContainerDied","Data":"e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"}
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380998 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00a45ef9c5e0a4a179301e799c6bf54f6ff0d30e7efc8df176d7b1e8a8eca4c"
Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.380722 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dmqw2"
Need to start a new one" pod="openstack/aodh-db-sync-dmqw2" Jan 30 21:40:36 crc kubenswrapper[4751]: I0130 21:40:36.926752 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 21:40:37 crc kubenswrapper[4751]: I0130 21:40:37.103788 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.213115 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.213183 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.509732 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 21:40:38 crc kubenswrapper[4751]: E0130 21:40:38.510298 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.510317 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.510606 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" containerName="aodh-db-sync" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.547428 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.547527 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555368 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555549 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.555661 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.631773 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.733810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.734520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.741773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.767922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.771055 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.779021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"aodh-0\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " pod="openstack/aodh-0" Jan 30 21:40:38 crc kubenswrapper[4751]: I0130 21:40:38.882455 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:40:39 crc kubenswrapper[4751]: I0130 21:40:39.300442 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-28zk5" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" probeResult="failure" output=< Jan 30 21:40:39 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:40:39 crc kubenswrapper[4751]: > Jan 30 21:40:39 crc kubenswrapper[4751]: I0130 21:40:39.410953 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:40:39 crc kubenswrapper[4751]: W0130 21:40:39.416475 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a202f4_615a_4f93_86ef_46b6a994dd48.slice/crio-5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55 WatchSource:0}: Error finding container 5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55: Status 404 returned error can't find the container with id 5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55 Jan 30 21:40:40 crc kubenswrapper[4751]: I0130 21:40:40.437314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"} Jan 30 21:40:40 crc kubenswrapper[4751]: I0130 21:40:40.437784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55"} Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" containerID="cri-o://6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" gracePeriod=30 Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.044914 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" containerID="cri-o://d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" gracePeriod=30 Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.045076 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" containerID="cri-o://423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" gracePeriod=30 Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.045152 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" containerID="cri-o://21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb" gracePeriod=30 Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.057844 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" probeResult="failure" output="Get 
\"http://10.217.0.253:3000/\": EOF" Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.451092 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" exitCode=2 Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.451149 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7"} Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.590450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.926934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 21:40:41 crc kubenswrapper[4751]: I0130 21:40:41.971116 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.201640 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.201943 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464388 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb" exitCode=0 Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464430 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" exitCode=0 Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb"} Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.464495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1"} Jan 30 21:40:42 crc kubenswrapper[4751]: I0130 21:40:42.549148 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.284108 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.284102 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:40:43 crc kubenswrapper[4751]: I0130 21:40:43.478473 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"} Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.494874 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerID="d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" exitCode=0 Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.495214 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2"} Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.574199 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685271 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685641 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685681 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685809 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.685915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") pod \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\" (UID: \"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08\") " Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.686561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.686640 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.690685 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd" (OuterVolumeSpecName: "kube-api-access-c68jd") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "kube-api-access-c68jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.693892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts" (OuterVolumeSpecName: "scripts") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.744276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798089 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c68jd\" (UniqueName: \"kubernetes.io/projected/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-kube-api-access-c68jd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798134 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798145 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.798155 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.821692 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.839474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data" (OuterVolumeSpecName: "config-data") pod "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" (UID: "8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.902094 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:44 crc kubenswrapper[4751]: I0130 21:40:44.902146 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.516667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"} Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08","Type":"ContainerDied","Data":"45f2a90ecd716a5036b578fd96b6b2949721363815ae3cd0cb7a739c6367674a"} Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521193 4751 scope.go:117] "RemoveContainer" containerID="21af121eb48e81f26f40336062b858e9071037b9c373a72bd1d118b16d6241fb" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.521389 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.589378 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.603148 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.644205 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645014 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645084 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645155 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645211 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645270 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: E0130 21:40:45.645402 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.645455 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646358 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-central-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646448 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="proxy-httpd" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646531 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="sg-core" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.646591 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" containerName="ceilometer-notification-agent" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.648637 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.652751 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.655247 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.673497 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.719522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821548 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.821755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.822493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.822924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.827532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.828791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.840798 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"ceilometer-0\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.858876 4751 scope.go:117] "RemoveContainer" containerID="423d5c0e48ebc4ebbb5d2c6df51425116510d1d23af248aa57b6d39b5308dda7" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.970493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:40:45 crc kubenswrapper[4751]: I0130 21:40:45.992102 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08" path="/var/lib/kubelet/pods/8e6b4242-4cff-47ed-a1a0-d13cb8cb3f08/volumes" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.071648 4751 scope.go:117] "RemoveContainer" containerID="d7b613c9fb5a4e06aefbc21a52f8dc19a4606e271e4fe01bbcecc755f41f5ef2" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.172902 4751 scope.go:117] "RemoveContainer" containerID="6c2f3f9f8e38206f31b75b809fd10381f8bc3e6137676fa3ac5b692f4ab1aec1" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerStarted","Data":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539866 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" containerID="cri-o://ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539910 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" containerID="cri-o://9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.539951 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" containerID="cri-o://f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" gracePeriod=30 Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.540000 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" containerID="cri-o://e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" gracePeriod=30 Jan 30 21:40:46 crc 
kubenswrapper[4751]: I0130 21:40:46.576683 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.10204679 podStartE2EDuration="8.576639178s" podCreationTimestamp="2026-01-30 21:40:38 +0000 UTC" firstStartedPulling="2026-01-30 21:40:39.419309084 +0000 UTC m=+1578.165131733" lastFinishedPulling="2026-01-30 21:40:45.893901462 +0000 UTC m=+1584.639724121" observedRunningTime="2026-01-30 21:40:46.564256186 +0000 UTC m=+1585.310078845" watchObservedRunningTime="2026-01-30 21:40:46.576639178 +0000 UTC m=+1585.322461827" Jan 30 21:40:46 crc kubenswrapper[4751]: I0130 21:40:46.737233 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.553980 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.554262 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"952c0e68caba01e8a19179d8cae039bc4ad5143d5f3d94ca42acc30936e372b0"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556549 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556581 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556588 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" exitCode=0 Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"} Jan 30 21:40:47 crc kubenswrapper[4751]: I0130 21:40:47.556653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"} Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.293561 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.357996 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:48 crc kubenswrapper[4751]: I0130 21:40:48.542285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:48 crc 
kubenswrapper[4751]: I0130 21:40:48.571386 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970"} Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.583005 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16"} Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.583193 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28zk5" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" containerID="cri-o://5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" gracePeriod=2 Jan 30 21:40:49 crc kubenswrapper[4751]: I0130 21:40:49.975747 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:40:49 crc kubenswrapper[4751]: E0130 21:40:49.976400 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.269715 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.355536 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") pod \"488bc1bc-a729-42ea-8a7c-20ace387607e\" (UID: \"488bc1bc-a729-42ea-8a7c-20ace387607e\") " Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356280 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities" (OuterVolumeSpecName: "utilities") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.356811 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.365654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44" (OuterVolumeSpecName: "kube-api-access-fpw44") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "kube-api-access-fpw44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.374119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "488bc1bc-a729-42ea-8a7c-20ace387607e" (UID: "488bc1bc-a729-42ea-8a7c-20ace387607e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.458679 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpw44\" (UniqueName: \"kubernetes.io/projected/488bc1bc-a729-42ea-8a7c-20ace387607e-kube-api-access-fpw44\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.458709 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/488bc1bc-a729-42ea-8a7c-20ace387607e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.598830 4751 generic.go:334] "Generic (PLEG): container finished" podID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" exitCode=0 Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"} Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28zk5" event={"ID":"488bc1bc-a729-42ea-8a7c-20ace387607e","Type":"ContainerDied","Data":"7804348a0fdd5e612f2972159acdb604e779f2dc22bcf28432d660b17f23f501"} Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599267 4751 scope.go:117] "RemoveContainer" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.599490 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28zk5" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.647587 4751 scope.go:117] "RemoveContainer" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.656644 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.676181 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28zk5"] Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.799882 4751 scope.go:117] "RemoveContainer" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.939695 4751 scope.go:117] "RemoveContainer" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.944593 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": container with ID starting with 5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8 not found: ID does not exist" containerID="5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.944644 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8"} err="failed to get container status \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": rpc error: code = NotFound desc = could not find container \"5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8\": container with ID starting with 5f9d809f24645e3e4acdbf44e7be5b696ce88635a2da5f9c7567c94c089461e8 not found: ID does not exist" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.944671 4751 scope.go:117] "RemoveContainer" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.954495 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": container with ID starting with 86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b not found: ID does not exist" containerID="86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.954544 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b"} err="failed to get container status \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": rpc error: code = NotFound desc = could not find container \"86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b\": container with ID starting with 86a72211e7e7fc26353c2a9a47f3913d1f47004de566a99860b994059b56da9b not found: ID does not exist" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.954571 4751 scope.go:117] "RemoveContainer" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: E0130 21:40:50.957986 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": container with ID starting with 139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9 not found: ID does not exist" containerID="139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9" Jan 30 21:40:50 crc kubenswrapper[4751]: I0130 21:40:50.958026 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9"} err="failed to get container status \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": rpc error: code = NotFound desc = could not find container \"139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9\": container with ID starting with 139af0fd70a0e1831f25361c5ad49958ccff917c006c30b5f26d72eac8b63bd9 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.549136 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.575082 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.600582 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") pod \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\" (UID: \"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.608078 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5" (OuterVolumeSpecName: "kube-api-access-b97x5") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "kube-api-access-b97x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619271 4751 generic.go:334] "Generic (PLEG): container finished" podID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" exitCode=137 Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619428 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5","Type":"ContainerDied","Data":"0603105be1749a5d28001ba428223b5f3edc9bed1bd5953cb98748d034ddf6d5"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619450 4751 scope.go:117] "RemoveContainer" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.619551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.635500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data" (OuterVolumeSpecName: "config-data") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.659253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" (UID: "ce9fb6c9-b64c-4470-9be6-f8686b59b0f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661204 4751 generic.go:334] "Generic (PLEG): container finished" podID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" exitCode=137 Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerDied","Data":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce9fb6c9-b64c-4470-9be6-f8686b59b0f8","Type":"ContainerDied","Data":"84126f69388906672210fb0c0f5b79f09ceedc6fc66204e43ac117768cbfb6e9"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.661374 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.668680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerStarted","Data":"2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc"} Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.669510 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.673020 4751 scope.go:117] "RemoveContainer" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.706031 4751 scope.go:117] "RemoveContainer" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.707061 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": container with ID starting with 104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7 not found: ID does not exist" containerID="104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707094 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7"} err="failed to get container status \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": rpc error: code = NotFound desc = could not find container \"104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7\": container with ID starting with 104eb5b3de599ec92d6566244c8bd9716b80d0669cf4726ec79bfd7817491fc7 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707116 4751 scope.go:117] "RemoveContainer" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.707591 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": container with ID starting with 4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2 not found: ID does not exist" containerID="4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707643 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2"} err="failed to get container status \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": rpc error: code = NotFound desc = could not find container \"4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2\": container with ID starting with 4aefbe2f56c7f9dcf869877e8817164ce4745373ce695b6a57d0467b39716da2 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.707671 4751 scope.go:117] "RemoveContainer" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708527 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708611 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708944 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") pod \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\" (UID: \"ef384c2e-1483-4ad5-aaa8-96c4a347bcb5\") " Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709604 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709621 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b97x5\" (UniqueName: \"kubernetes.io/projected/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-kube-api-access-b97x5\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709632 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.709643 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs" (OuterVolumeSpecName: "logs") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.708500 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.82600271 podStartE2EDuration="6.708482662s" podCreationTimestamp="2026-01-30 21:40:45 +0000 UTC" firstStartedPulling="2026-01-30 21:40:46.733709775 +0000 UTC m=+1585.479532414" lastFinishedPulling="2026-01-30 21:40:50.616189717 +0000 UTC m=+1589.362012366" observedRunningTime="2026-01-30 21:40:51.690751196 +0000 UTC m=+1590.436573855" watchObservedRunningTime="2026-01-30 21:40:51.708482662 +0000 UTC m=+1590.454305321" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.750423 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.758625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb" (OuterVolumeSpecName: "kube-api-access-cf8kb") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "kube-api-access-cf8kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.771558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.782870 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.783254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data" (OuterVolumeSpecName: "config-data") pod "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" (UID: "ef384c2e-1483-4ad5-aaa8-96c4a347bcb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811717 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf8kb\" (UniqueName: \"kubernetes.io/projected/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-kube-api-access-cf8kb\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811747 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811756 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.811776 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.813846 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814285 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-content" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814296 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-content" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814343 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-utilities" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814349 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="extract-utilities" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814362 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814376 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814388 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814394 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814424 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814431 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.814443 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: 
I0130 21:40:51.814450 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814685 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-log" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814706 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814724 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" containerName="registry-server" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.814737 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" containerName="nova-metadata-metadata" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.815742 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823524 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823837 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.823951 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.829434 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.859703 4751 scope.go:117] "RemoveContainer" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: E0130 21:40:51.860580 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": container with ID starting with 37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13 not found: ID does not exist" containerID="37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.860617 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13"} err="failed to get container status \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": rpc error: code = NotFound desc = could not find container \"37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13\": container with ID starting with 37d52d20839d3d480c87a07910f0f2bbf2f866d3e6a0dfd6bce0c2617717ca13 not found: ID does not exist" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.913927 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: 
I0130 21:40:51.913986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914108 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:51 crc kubenswrapper[4751]: I0130 21:40:51.914195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.017520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.017581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.018763 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: 
I0130 21:40:52.026289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.027008 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488bc1bc-a729-42ea-8a7c-20ace387607e" path="/var/lib/kubelet/pods/488bc1bc-a729-42ea-8a7c-20ace387607e/volumes" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.027175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.028788 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9fb6c9-b64c-4470-9be6-f8686b59b0f8" path="/var/lib/kubelet/pods/ce9fb6c9-b64c-4470-9be6-f8686b59b0f8/volumes" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029450 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.029475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.033836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/150d4911-b366-4c81-b4fa-b5c5e8cadc78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.035656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8glx\" (UniqueName: \"kubernetes.io/projected/150d4911-b366-4c81-b4fa-b5c5e8cadc78-kube-api-access-x8glx\") pod \"nova-cell1-novncproxy-0\" (UID: \"150d4911-b366-4c81-b4fa-b5c5e8cadc78\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.047260 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.049178 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.050678 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.051135 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.068839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.120954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.120999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.121246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.160407 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.208343 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.209246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.212066 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.212401 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224354 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.224772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.230810 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.234573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.243456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.247438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"nova-metadata-0\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") " pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.444595 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.683154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.698718 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.707602 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.921381 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.923572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:52 crc kubenswrapper[4751]: I0130 21:40:52.951398 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.018260 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054573 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.054726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.158128 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.158188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160208 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160461 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.160725 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.162993 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.163926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.183705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"dnsmasq-dns-f84f9ccf-z9wt9\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.325617 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.740512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"150d4911-b366-4c81-b4fa-b5c5e8cadc78","Type":"ContainerStarted","Data":"14ce6ffec99e0c80e4971861f03d514c9f7e3b18520d2d18102b41cfcbca8741"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.741130 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"150d4911-b366-4c81-b4fa-b5c5e8cadc78","Type":"ContainerStarted","Data":"e0362de9a7e0469b252bb7c768e54742f32662d53a68e945693c5681ad433159"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.756251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.756287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"204fbf404d1c8ba52093e260239c68e9a0a2f21f814a66b0d5a430c499419aa4"} Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.774704 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.77468543 podStartE2EDuration="2.77468543s" podCreationTimestamp="2026-01-30 21:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:53.764318092 +0000 UTC m=+1592.510140751" watchObservedRunningTime="2026-01-30 21:40:53.77468543 +0000 UTC m=+1592.520508079" Jan 30 21:40:53 crc kubenswrapper[4751]: I0130 21:40:53.926475 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.000151 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef384c2e-1483-4ad5-aaa8-96c4a347bcb5" path="/var/lib/kubelet/pods/ef384c2e-1483-4ad5-aaa8-96c4a347bcb5/volumes" Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.767509 4751 generic.go:334] "Generic (PLEG): container finished" podID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerID="6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678" exitCode=0 Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.767558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.768034 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerStarted","Data":"dee8c983aae4ef6924c6d9d77cb8b52f55a1c21e202b60331cafd82b2208a0d0"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.771087 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerStarted","Data":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"} Jan 30 21:40:54 crc kubenswrapper[4751]: I0130 21:40:54.810913 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.810898403 podStartE2EDuration="3.810898403s" podCreationTimestamp="2026-01-30 21:40:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:54.804244245 +0000 UTC m=+1593.550066894" watchObservedRunningTime="2026-01-30 21:40:54.810898403 +0000 UTC m=+1593.556721052" Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.662993 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.782191 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" containerID="cri-o://8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" gracePeriod=30 Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.783676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerStarted","Data":"594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7"} Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.783709 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.784501 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" containerID="cri-o://512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" gracePeriod=30 Jan 30 21:40:55 crc kubenswrapper[4751]: I0130 21:40:55.825265 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podStartSLOduration=3.82524884 podStartE2EDuration="3.82524884s" podCreationTimestamp="2026-01-30 21:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:40:55.821872689 +0000 UTC m=+1594.567695348" watchObservedRunningTime="2026-01-30 21:40:55.82524884 +0000 UTC m=+1594.571071489" Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.074225 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.074660 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" containerID="cri-o://2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.075581 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" containerID="cri-o://2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.075649 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" containerID="cri-o://fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" gracePeriod=30 Jan 30 21:40:56 crc 
kubenswrapper[4751]: I0130 21:40:56.075697 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" containerID="cri-o://fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" gracePeriod=30 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.799855 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" exitCode=0 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800206 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" exitCode=2 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800217 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" exitCode=0 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.799954 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.800297 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970"} Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.803168 4751 generic.go:334] "Generic (PLEG): container finished" podID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" exitCode=143 Jan 30 21:40:56 crc kubenswrapper[4751]: I0130 21:40:56.803276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"} Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.160661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.445422 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:57 crc kubenswrapper[4751]: I0130 21:40:57.445474 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.544853 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642756 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.642799 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") pod \"e039239f-9678-4ac5-bbd9-31120a7e569a\" (UID: \"e039239f-9678-4ac5-bbd9-31120a7e569a\") " Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.643101 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs" (OuterVolumeSpecName: "logs") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.643593 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e039239f-9678-4ac5-bbd9-31120a7e569a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.651726 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh" (OuterVolumeSpecName: "kube-api-access-ndzzh") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "kube-api-access-ndzzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.703026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.733494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data" (OuterVolumeSpecName: "config-data") pod "e039239f-9678-4ac5-bbd9-31120a7e569a" (UID: "e039239f-9678-4ac5-bbd9-31120a7e569a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745638 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzzh\" (UniqueName: \"kubernetes.io/projected/e039239f-9678-4ac5-bbd9-31120a7e569a-kube-api-access-ndzzh\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745677 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.745691 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e039239f-9678-4ac5-bbd9-31120a7e569a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832567 4751 generic.go:334] "Generic (PLEG): container finished" podID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" exitCode=0 Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"} Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832636 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e039239f-9678-4ac5-bbd9-31120a7e569a","Type":"ContainerDied","Data":"6f2d3e4731c7bd546c8fba7d19f7cf9cd29af48c0d46d6409c89f580aefa0bd4"} Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832637 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.832655 4751 scope.go:117] "RemoveContainer" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.865959 4751 scope.go:117] "RemoveContainer" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.868545 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.889720 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902125 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.902663 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902685 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.902719 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.902728 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.903001 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-log" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.903031 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" containerName="nova-api-api" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.904951 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908669 4751 scope.go:117] "RemoveContainer" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908843 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.908918 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.909073 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": container with ID starting with 512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8 not found: ID does not exist" containerID="512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909100 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8"} err="failed to get container status \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": rpc error: code = NotFound desc = could not find container \"512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8\": container with ID starting with 512781e56c77850103d27194e2aa5d14e00bd229876195f8a49389ffbb66fbb8 not found: ID does not exist" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909120 4751 scope.go:117] "RemoveContainer" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 21:40:59 crc kubenswrapper[4751]: E0130 21:40:59.909420 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": container with ID starting with 8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe not found: ID does not exist" containerID="8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.909459 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe"} err="failed to get container status \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": rpc error: code = NotFound desc = could not find container \"8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe\": container with ID starting with 8ad21c2ddc7303115b4968bbab81557f089623a21c2511f765a86b5c7171eebe not found: ID does not exist" Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.929763 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:40:59 crc kubenswrapper[4751]: I0130 21:40:59.991342 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e039239f-9678-4ac5-bbd9-31120a7e569a" path="/var/lib/kubelet/pods/e039239f-9678-4ac5-bbd9-31120a7e569a/volumes" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.051915 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.052528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " 
pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.154969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.155402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.162417 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.182838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.200716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"nova-api-0\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.232849 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.823098 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:00 crc kubenswrapper[4751]: I0130 21:41:00.844126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"b0dd113a348b24b15d13958c7f00387904d846fc6379d3d8b34c108e756e0709"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.858222 4751 generic.go:334] "Generic (PLEG): container finished" podID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerID="2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" exitCode=0 Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.858372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.860875 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.860901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerStarted","Data":"e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba"} Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.898608 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.898589261 podStartE2EDuration="2.898589261s" podCreationTimestamp="2026-01-30 21:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:01.877034234 +0000 UTC m=+1600.622856873" watchObservedRunningTime="2026-01-30 21:41:01.898589261 +0000 UTC m=+1600.644411910" Jan 30 21:41:01 crc kubenswrapper[4751]: I0130 21:41:01.998395 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122441 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122579 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122717 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.122758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") pod \"62fe5344-a9e0-40c6-9c81-06061248f1f6\" (UID: \"62fe5344-a9e0-40c6-9c81-06061248f1f6\") " Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.124691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.124620 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.130953 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts" (OuterVolumeSpecName: "scripts") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.131294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf" (OuterVolumeSpecName: "kube-api-access-fg2hf") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "kube-api-access-fg2hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.160876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.164397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.180777 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226301 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226354 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62fe5344-a9e0-40c6-9c81-06061248f1f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226370 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226381 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg2hf\" (UniqueName: \"kubernetes.io/projected/62fe5344-a9e0-40c6-9c81-06061248f1f6-kube-api-access-fg2hf\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.226393 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.229232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.246960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data" (OuterVolumeSpecName: "config-data") pod "62fe5344-a9e0-40c6-9c81-06061248f1f6" (UID: "62fe5344-a9e0-40c6-9c81-06061248f1f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.331450 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.331490 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fe5344-a9e0-40c6-9c81-06061248f1f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.444728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.444766 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933075 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62fe5344-a9e0-40c6-9c81-06061248f1f6","Type":"ContainerDied","Data":"952c0e68caba01e8a19179d8cae039bc4ad5143d5f3d94ca42acc30936e372b0"} Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933152 4751 scope.go:117] "RemoveContainer" containerID="2694e2dbfd66f237208db55c844cda317cabffa6ea9248c49cdacc371ec12ccc" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.933382 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.951846 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 21:41:02 crc kubenswrapper[4751]: I0130 21:41:02.980285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.014601 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.035290 4751 scope.go:117] "RemoveContainer" containerID="fcda05dd91f891b6b10d97096bd01c5909bc42dc90db535c273e9630d9ad1d16" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.072940 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073448 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073461 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073474 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073482 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073491 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073497 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: E0130 21:41:03.073531 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073539 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073735 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-central-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073753 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="sg-core" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073763 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="proxy-httpd" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.073777 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" containerName="ceilometer-notification-agent" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.075889 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.083519 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.087711 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.090940 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.158256 4751 scope.go:117] "RemoveContainer" containerID="fc7b159909c752eb3954d73a3b4fb088f0cc4914ab1f1a66d75adb6cdd4ca970" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164103 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.164496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.168879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.168936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.233170 4751 scope.go:117] "RemoveContainer" containerID="2ef5340b3986daee0e9ee3b7fa6ae7d3a92b434f52410ab75ef5c07485682360" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.240637 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"]
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.254319 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.261776 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.262092 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303657 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.303767 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304043 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304066 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.304316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.319800 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"]
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.329096 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.330688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.330944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.332277 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.337028 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.345550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.381968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.398633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"ceilometer-0\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") " pod="openstack/ceilometer-0"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.473634 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"]
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.474151 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" containerID="cri-o://aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36" gracePeriod=10
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.474276 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
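[editor's note] The "Killing container with a grace period ... gracePeriod=10" entry above is kubelet acting on an API-side delete. A hedged client-go sketch of the kind of delete call that produces it; namespace, pod name, and grace period are taken from the log, the kubeconfig wiring is a generic example, not how the operator actually issues the delete:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// gracePeriod=10 matches the kuberuntime_container.go entry above;
	// kubelet sends SIGTERM, then SIGKILL when the period expires.
	grace := int64(10)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"dnsmasq-dns-568d7fd7cf-hpws7",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}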
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.485977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486303 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.486549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.490269 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.490658 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596353 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k"
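[editor's note] The two nova-metadata-0 "Probe failed" entries above are startup probes doing an HTTPS GET against port 8775 that times out while the service is still coming up. A minimal sketch of a startup probe that would emit exactly this failure, assuming recent k8s.io/api types; the path, port, and scheme come from the log, the timing values are illustrative and not read from the real pod spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   5,  // exceeded -> "Client.Timeout exceeded while awaiting headers"
		PeriodSeconds:    10, // illustrative
		FailureThreshold: 30, // illustrative; startup probes usually allow a long ramp-up
	}
	fmt.Printf("%+v\n", probe)
}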
pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.596790 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.606719 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.610232 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.624101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.626496 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"nova-cell1-cell-mapping-cxj8k\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") " pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.903519 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k" Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.949837 4751 generic.go:334] "Generic (PLEG): container finished" podID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerID="aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36" exitCode=0 Jan 30 21:41:03 crc kubenswrapper[4751]: I0130 21:41:03.949926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36"} Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.013043 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fe5344-a9e0-40c6-9c81-06061248f1f6" path="/var/lib/kubelet/pods/62fe5344-a9e0-40c6-9c81-06061248f1f6/volumes" Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.152629 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216207 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216315 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216415 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.216762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") pod \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\" (UID: \"f5ccd9fd-19b5-4def-9fec-de483cdc8282\") "
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.222030 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n" (OuterVolumeSpecName: "kube-api-access-4qn7n") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "kube-api-access-4qn7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.308876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config" (OuterVolumeSpecName: "config") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.319165 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qn7n\" (UniqueName: \"kubernetes.io/projected/f5ccd9fd-19b5-4def-9fec-de483cdc8282-kube-api-access-4qn7n\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.319192 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-config\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.323232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.338188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.377208 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.396837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.402632 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5ccd9fd-19b5-4def-9fec-de483cdc8282" (UID: "f5ccd9fd-19b5-4def-9fec-de483cdc8282"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421123 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421155 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421168 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.421176 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5ccd9fd-19b5-4def-9fec-de483cdc8282-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.602500 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"]
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964002 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" event={"ID":"f5ccd9fd-19b5-4def-9fec-de483cdc8282","Type":"ContainerDied","Data":"90582d1ee044bf5a553f8b95b8254b85197e43f8fadf0df2ec897950782d7dc8"}
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964464 4751 scope.go:117] "RemoveContainer" containerID="aa2893876b0b686f16a08289b6eaf353d9eb4d024f387b3760981d23221d7e36"
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.964771 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7"
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.976090 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:41:04 crc kubenswrapper[4751]: E0130 21:41:04.976369 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.977334 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"b29538a0f0729315334798054b08c861d4a2892710b8f4d3c3dc99157c776bfc"}
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.988305 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerStarted","Data":"d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f"}
Jan 30 21:41:04 crc kubenswrapper[4751]: I0130 21:41:04.988333 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerStarted","Data":"2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f"}
Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.028240 4751 scope.go:117] "RemoveContainer" containerID="2cef368e1d9de3d2fb099a0412649b6c02ad1c0e0295100cf195bfffa3dcf34f"
Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.049469 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"]
Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.063561 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hpws7"]
Jan 30 21:41:05 crc kubenswrapper[4751]: I0130 21:41:05.064751 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-cxj8k" podStartSLOduration=2.06472922 podStartE2EDuration="2.06472922s" podCreationTimestamp="2026-01-30 21:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:05.044513038 +0000 UTC m=+1603.790335687" watchObservedRunningTime="2026-01-30 21:41:05.06472922 +0000 UTC m=+1603.810551879"
Jan 30 21:41:06 crc kubenswrapper[4751]: I0130 21:41:06.019071 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" path="/var/lib/kubelet/pods/f5ccd9fd-19b5-4def-9fec-de483cdc8282/volumes"
Jan 30 21:41:06 crc kubenswrapper[4751]: I0130 21:41:06.057671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"}
Jan 30 21:41:07 crc kubenswrapper[4751]: I0130 21:41:07.072960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"}
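[editor's note] The machine-config-daemon entry a few lines up shows kubelet's restart back-off saturated at its 5m0s cap: the pod is in CrashLoopBackOff, and every sync attempt is skipped until the back-off expires. A small client-go sketch of reading that same condition from pod status; the namespace and pod name come from the log, the kubeconfig wiring is a generic example:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("openshift-machine-config-operator").Get(
		context.TODO(), "machine-config-daemon-vgfkp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// For the pod above this would print reason "CrashLoopBackOff".
		if w := st.State.Waiting; w != nil {
			fmt.Printf("%s: %s (%s)\n", st.Name, w.Reason, w.Message)
		}
	}
}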
event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"} Jan 30 21:41:07 crc kubenswrapper[4751]: I0130 21:41:07.073298 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"} Jan 30 21:41:08 crc kubenswrapper[4751]: I0130 21:41:08.908551 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-568d7fd7cf-hpws7" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.251:5353: i/o timeout" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.113679 4751 generic.go:334] "Generic (PLEG): container finished" podID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerID="d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f" exitCode=0 Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.113812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerDied","Data":"d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f"} Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.117064 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerStarted","Data":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"} Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.117492 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.163107 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.412577037 podStartE2EDuration="8.163079648s" podCreationTimestamp="2026-01-30 21:41:02 +0000 UTC" firstStartedPulling="2026-01-30 21:41:04.376615641 +0000 UTC m=+1603.122438290" lastFinishedPulling="2026-01-30 21:41:09.127118242 +0000 UTC m=+1607.872940901" observedRunningTime="2026-01-30 21:41:10.152814034 +0000 UTC m=+1608.898636703" watchObservedRunningTime="2026-01-30 21:41:10.163079648 +0000 UTC m=+1608.908902347" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.241833 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:10 crc kubenswrapper[4751]: I0130 21:41:10.242243 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.268552 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.268577 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.669204 4751 util.go:48] "No ready sandbox 
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711101 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") "
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711183 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") "
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.711263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") "
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.733431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts" (OuterVolumeSpecName: "scripts") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.757959 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data" (OuterVolumeSpecName: "config-data") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.759477 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.822833 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") pod \"97ec060c-3c30-41e4-946c-7fb4584c7e85\" (UID: \"97ec060c-3c30-41e4-946c-7fb4584c7e85\") "
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824444 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824468 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.824479 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ec060c-3c30-41e4-946c-7fb4584c7e85-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.837432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk" (OuterVolumeSpecName: "kube-api-access-5vhjk") pod "97ec060c-3c30-41e4-946c-7fb4584c7e85" (UID: "97ec060c-3c30-41e4-946c-7fb4584c7e85"). InnerVolumeSpecName "kube-api-access-5vhjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:41:11 crc kubenswrapper[4751]: I0130 21:41:11.929316 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vhjk\" (UniqueName: \"kubernetes.io/projected/97ec060c-3c30-41e4-946c-7fb4584c7e85-kube-api-access-5vhjk\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.187431 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-cxj8k"
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.190712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-cxj8k" event={"ID":"97ec060c-3c30-41e4-946c-7fb4584c7e85","Type":"ContainerDied","Data":"2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f"}
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.190763 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7070a6d1fa5810130342db56a18fb257d8293b60ab7a27b1597f49cdb9136f"
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354471 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354742 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" containerID="cri-o://e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" gracePeriod=30
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.354797 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" containerID="cri-o://493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" gracePeriod=30
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.375160 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.375525 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" containerID="cri-o://71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" gracePeriod=30
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.397683 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.397901 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log" containerID="cri-o://d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" gracePeriod=30
Jan 30 21:41:12 crc kubenswrapper[4751]: I0130 21:41:12.398399 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata" containerID="cri-o://5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" gracePeriod=30
Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.224021 4751 generic.go:334] "Generic (PLEG): container finished" podID="b3c695f6-b212-4f47-9a88-76996d92772d" containerID="e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" exitCode=143
Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.224457 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba"}
Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.230474 4751 generic.go:334] "Generic (PLEG): container finished" podID="df87edd8-7be6-4739-b927-7fd4415a1945" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69" exitCode=143
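[editor's note] The exitCode values PLEG reports here and below follow the usual 128+signal convention: 143 = 128+15 (SIGTERM, the graceful kill within the grace period), 137 = 128+9 (SIGKILL, sent when the grace period expires), and 0 is a clean exit. A trivial decoder:

package main

import "fmt"

// describeExit maps a container exit code to its conventional meaning.
func describeExit(code int) string {
	switch {
	case code == 0:
		return "clean exit"
	case code > 128:
		return fmt.Sprintf("killed by signal %d", code-128) // 143 -> SIGTERM(15), 137 -> SIGKILL(9)
	default:
		return "application error"
	}
}

func main() {
	for _, c := range []int{0, 143, 137} {
		fmt.Printf("exitCode=%d: %s\n", c, describeExit(c))
	}
}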
Jan 30 21:41:13 crc kubenswrapper[4751]: I0130 21:41:13.230516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"}
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.079111 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.239497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") "
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") "
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240109 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") "
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") "
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") pod \"df87edd8-7be6-4739-b927-7fd4415a1945\" (UID: \"df87edd8-7be6-4739-b927-7fd4415a1945\") "
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs" (OuterVolumeSpecName: "logs") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.240979 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df87edd8-7be6-4739-b927-7fd4415a1945-logs\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.248823 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz" (OuterVolumeSpecName: "kube-api-access-5htcz") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "kube-api-access-5htcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269564 4751 generic.go:334] "Generic (PLEG): container finished" podID="df87edd8-7be6-4739-b927-7fd4415a1945" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303" exitCode=0
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"}
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df87edd8-7be6-4739-b927-7fd4415a1945","Type":"ContainerDied","Data":"204fbf404d1c8ba52093e260239c68e9a0a2f21f814a66b0d5a430c499419aa4"}
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269691 4751 scope.go:117] "RemoveContainer" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.269839 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.272286 4751 generic.go:334] "Generic (PLEG): container finished" podID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" exitCode=0
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.272372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerDied","Data":"71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334"}
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.273140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.277217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data" (OuterVolumeSpecName: "config-data") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.316661 4751 scope.go:117] "RemoveContainer" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.326588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "df87edd8-7be6-4739-b927-7fd4415a1945" (UID: "df87edd8-7be6-4739-b927-7fd4415a1945"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343378 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5htcz\" (UniqueName: \"kubernetes.io/projected/df87edd8-7be6-4739-b927-7fd4415a1945-kube-api-access-5htcz\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343415 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343426 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343438 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df87edd8-7be6-4739-b927-7fd4415a1945-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.343942 4751 scope.go:117] "RemoveContainer" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.344451 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": container with ID starting with 5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303 not found: ID does not exist" containerID="5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344477 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303"} err="failed to get container status \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": rpc error: code = NotFound desc = could not find container \"5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303\": container with ID starting with 5162b0f2ca635c63cf74041334e9871e0f6a2e8419d5aa845abf4d636f131303 not found: ID does not exist"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344498 4751 scope.go:117] "RemoveContainer" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.344764 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": container with ID starting with d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69 not found: ID does not exist" containerID="d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.344786 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69"} err="failed to get container status \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": rpc error: code = NotFound desc = could not find container \"d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69\": container with ID starting with d26f985b16cf81ac9f0e0967393b88d99beb19dfcd3aaa1006bcfeec0fafcc69 not found: ID does not exist"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.619427 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.639274 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666105 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666745 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666767 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666784 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666793 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666813 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="init"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="init"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666843 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666850 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.666870 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.666878 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667237 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-metadata"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667262 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ccd9fd-19b5-4def-9fec-de483cdc8282" containerName="dnsmasq-dns"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667289 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" containerName="nova-metadata-log"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.667311 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" containerName="nova-manage"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.668952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.673093 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.673538 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.679912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755144 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.755935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.857981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0"
pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.858050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.858103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.859196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/179951f5-39be-43d7-a2fa-3c6f04555760-logs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.863053 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.873826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.888226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp2tl\" (UniqueName: \"kubernetes.io/projected/179951f5-39be-43d7-a2fa-3c6f04555760-kube-api-access-jp2tl\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.898072 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/179951f5-39be-43d7-a2fa-3c6f04555760-config-data\") pod \"nova-metadata-0\" (UID: \"179951f5-39be-43d7-a2fa-3c6f04555760\") " pod="openstack/nova-metadata-0" Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.929935 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930163 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930368 4751 log.go:32] "ExecSync cmd from runtime 
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.930390 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.975645 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:41:16 crc kubenswrapper[4751]: E0130 21:41:16.975932 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:41:16 crc kubenswrapper[4751]: I0130 21:41:16.989357 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.010059 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.117863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.163764 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") "
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.163877 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") "
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.164110 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") "
Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.171603 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5" (OuterVolumeSpecName: "kube-api-access-k5sm5") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "kube-api-access-k5sm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
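[editor's note] The three "ExecSync cmd from runtime service failed" entries and the "Probe errored" entry above are a readiness probe racing the container's deletion: the prober still tries to exec /usr/bin/pgrep inside nova-scheduler-scheduler after the container is gone. A sketch of the exec probe implied by that cmd, assuming recent k8s.io/api types; the command is exactly what the log shows, the period is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				// Succeeds only if a nova-scheduler process exists and is
				// not in one of the D/R/S/T states pgrep -r filters on.
				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
			},
		},
		PeriodSeconds: 10, // illustrative
	}
	fmt.Printf("%+v\n", probe)
}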
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.204002 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle podName:27bb36a2-bfe5-4dca-a828-ea50cd77e9f3 nodeName:}" failed. No retries permitted until 2026-01-30 21:41:17.703971745 +0000 UTC m=+1616.449794394 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3") : error deleting /var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volume-subpaths: remove /var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volume-subpaths: no such file or directory Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.207550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data" (OuterVolumeSpecName: "config-data") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266557 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266853 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.266875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") pod \"80a202f4-615a-4f93-86ef-46b6a994dd48\" (UID: \"80a202f4-615a-4f93-86ef-46b6a994dd48\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.267574 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.267593 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5sm5\" (UniqueName: \"kubernetes.io/projected/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-kube-api-access-k5sm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.271475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts" (OuterVolumeSpecName: "scripts") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" 
(UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.275788 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr" (OuterVolumeSpecName: "kube-api-access-zgzzr") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "kube-api-access-zgzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.292125 4751 generic.go:334] "Generic (PLEG): container finished" podID="b3c695f6-b212-4f47-9a88-76996d92772d" containerID="493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" exitCode=0 Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.292416 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.299484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3","Type":"ContainerDied","Data":"48f5ec0c53f7e04ad3c659a4b9e04d6883529b029a3420700108431ca3b92a48"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.299881 4751 scope.go:117] "RemoveContainer" containerID="71e518ce4b47249b7fe655e27ae7ba7dfbf14678e631713c379de27311773334" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.300069 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308236 4751 generic.go:334] "Generic (PLEG): container finished" podID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" exitCode=137 Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"80a202f4-615a-4f93-86ef-46b6a994dd48","Type":"ContainerDied","Data":"5a4c0dc75f2802cc1bc85d8688e41faab929773b0378a29a4bf4c2cf7cf3db55"} Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.308530 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.331671 4751 scope.go:117] "RemoveContainer" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.355756 4751 scope.go:117] "RemoveContainer" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.370795 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgzzr\" (UniqueName: \"kubernetes.io/projected/80a202f4-615a-4f93-86ef-46b6a994dd48-kube-api-access-zgzzr\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.371000 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.385914 4751 scope.go:117] "RemoveContainer" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.401222 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.430530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data" (OuterVolumeSpecName: "config-data") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.432643 4751 scope.go:117] "RemoveContainer" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.436255 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80a202f4-615a-4f93-86ef-46b6a994dd48" (UID: "80a202f4-615a-4f93-86ef-46b6a994dd48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469060 4751 scope.go:117] "RemoveContainer" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.469563 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": container with ID starting with 9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598 not found: ID does not exist" containerID="9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469603 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598"} err="failed to get container status \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": rpc error: code = NotFound desc = could not find container \"9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598\": container with ID starting with 9bf0c04083e50725f96ae8263bbc6b5600a1a880e73c529e01299c59f1bc0598 not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469630 4751 scope.go:117] "RemoveContainer" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.469935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": container with ID starting with f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb not found: ID does not exist" containerID="f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.469985 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb"} err="failed to get container status \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": rpc error: code = NotFound desc = could not find container \"f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb\": container with ID starting with f65dee292f2c23dd9aaa3469805cfd92335ffa750e94e1f3024d9787551b98fb not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470008 4751 scope.go:117] "RemoveContainer" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.470236 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": container with ID starting with e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19 not found: ID does not exist" containerID="e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470262 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19"} err="failed to get container status \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": rpc error: code = NotFound desc = could not 
find container \"e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19\": container with ID starting with e7be36c3343336feecf3065582864c9f299a466b29a188fe7804d8ed92ecbf19 not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470282 4751 scope.go:117] "RemoveContainer" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.470718 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": container with ID starting with ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd not found: ID does not exist" containerID="ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.470741 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd"} err="failed to get container status \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": rpc error: code = NotFound desc = could not find container \"ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd\": container with ID starting with ac23d604ebb31b286e737554d5742bb44d19b731f343101921e02b829138a8cd not found: ID does not exist" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.473526 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.473553 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80a202f4-615a-4f93-86ef-46b6a994dd48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575006 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575173 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575429 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") pod \"b3c695f6-b212-4f47-9a88-76996d92772d\" (UID: \"b3c695f6-b212-4f47-9a88-76996d92772d\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.575505 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs" (OuterVolumeSpecName: "logs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.576283 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3c695f6-b212-4f47-9a88-76996d92772d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.582019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5" (OuterVolumeSpecName: "kube-api-access-ldhm5") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "kube-api-access-ldhm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.595692 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.611277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.631657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.642627 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data" (OuterVolumeSpecName: "config-data") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.669568 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.683029 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690048 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690082 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhm5\" (UniqueName: \"kubernetes.io/projected/b3c695f6-b212-4f47-9a88-76996d92772d-kube-api-access-ldhm5\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690094 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.690103 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.698155 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705763 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705792 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705831 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705838 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.705858 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.705864 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735408 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735600 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735687 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735754 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 
crc kubenswrapper[4751]: E0130 21:41:17.735860 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.735915 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: E0130 21:41:17.735970 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736017 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736875 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-listener" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.736980 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-log" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737050 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-notifier" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737123 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737187 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" containerName="nova-api-api" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737246 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" containerName="aodh-evaluator" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.737312 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" containerName="nova-scheduler-scheduler" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.788649 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.791825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") pod \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\" (UID: \"27bb36a2-bfe5-4dca-a828-ea50cd77e9f3\") " Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.793978 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794011 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794087 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794274 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794487 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.794562 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.805611 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" (UID: "27bb36a2-bfe5-4dca-a828-ea50cd77e9f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.814927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.822180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b3c695f6-b212-4f47-9a88-76996d92772d" (UID: "b3c695f6-b212-4f47-9a88-76996d92772d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898644 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898714 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.898916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.899319 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.902723 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3c695f6-b212-4f47-9a88-76996d92772d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:17 crc kubenswrapper[4751]: 
I0130 21:41:17.904170 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.905765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.907203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.908721 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.909581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.933646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"aodh-0\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " pod="openstack/aodh-0" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.991559 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a202f4-615a-4f93-86ef-46b6a994dd48" path="/var/lib/kubelet/pods/80a202f4-615a-4f93-86ef-46b6a994dd48/volumes" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.994763 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df87edd8-7be6-4739-b927-7fd4415a1945" path="/var/lib/kubelet/pods/df87edd8-7be6-4739-b927-7fd4415a1945/volumes" Jan 30 21:41:17 crc kubenswrapper[4751]: I0130 21:41:17.997601 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.032712 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.052675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.076079 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.077977 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.080314 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.084513 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.209667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.209998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.210481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.312810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.317068 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.318835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977b9205-4c23-4ff0-9193-5938e4b87c64-config-data\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.353492 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"3e95ad146a080153432985e867e8880f69ffc1b6ecc59514cf68490b610fd23b"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.353542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"995d4040c7bb283e459a1de6d5d00a384ae11a3aeb69f518aafdb3f6073f40ea"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365126 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365261 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3c695f6-b212-4f47-9a88-76996d92772d","Type":"ContainerDied","Data":"b0dd113a348b24b15d13958c7f00387904d846fc6379d3d8b34c108e756e0709"} Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.365316 4751 scope.go:117] "RemoveContainer" containerID="493a2c726ee0c7312a91490a0bea812358c26401fdc9a767242108a7737a1808" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.367779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlfc5\" (UniqueName: \"kubernetes.io/projected/977b9205-4c23-4ff0-9193-5938e4b87c64-kube-api-access-qlfc5\") pod \"nova-scheduler-0\" (UID: \"977b9205-4c23-4ff0-9193-5938e4b87c64\") " pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.398259 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.402294 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.417663 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.426863 4751 scope.go:117] "RemoveContainer" containerID="e4635b2c42789dec615eb35af87abe175cead9e8bde37a6b842b0a483841edba" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.442119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.444111 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.445824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.447986 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.448947 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.464577 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: W0130 21:41:18.514429 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c6b6e10_77a2_49e7_a4eb_25af482bfab8.slice/crio-7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c WatchSource:0}: Error finding container 7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c: Status 404 returned error can't find the container with id 7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.516042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618136 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618235 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.618326 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.719735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720154 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.720250 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.722084 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3c7d82f-3209-44cf-a463-9affaab3de75-logs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.726141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.728271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.730424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-config-data\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 
21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.733863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3c7d82f-3209-44cf-a463-9affaab3de75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.737221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j89\" (UniqueName: \"kubernetes.io/projected/e3c7d82f-3209-44cf-a463-9affaab3de75-kube-api-access-t9j89\") pod \"nova-api-0\" (UID: \"e3c7d82f-3209-44cf-a463-9affaab3de75\") " pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.774211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 21:41:18 crc kubenswrapper[4751]: I0130 21:41:18.974171 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.273045 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.286282 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.385542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"e5be611bbcb98e16ae767655b91785e96757242b89cc7b49991fb5b3b7ff221e"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.387203 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.387234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.388564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977b9205-4c23-4ff0-9193-5938e4b87c64","Type":"ContainerStarted","Data":"592f899a44405cf48606e6fffb3f2c880c353fdad9b14bfb441a0255d24d3ec4"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.388590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977b9205-4c23-4ff0-9193-5938e4b87c64","Type":"ContainerStarted","Data":"d973385925aaf9eb882a3e1b1f56491729d38a0fbc6fa8e07757410b1d762f10"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.396217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"179951f5-39be-43d7-a2fa-3c6f04555760","Type":"ContainerStarted","Data":"694e8e9d8c7eaed6f2fddf06358e6f5b4230b1c32f08acef6003ccbb683541bc"} Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.405785 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.405764865 podStartE2EDuration="1.405764865s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
21:41:19.402721943 +0000 UTC m=+1618.148544592" watchObservedRunningTime="2026-01-30 21:41:19.405764865 +0000 UTC m=+1618.151587504" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.422122 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.422102702 podStartE2EDuration="3.422102702s" podCreationTimestamp="2026-01-30 21:41:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:19.416799931 +0000 UTC m=+1618.162622590" watchObservedRunningTime="2026-01-30 21:41:19.422102702 +0000 UTC m=+1618.167925371" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.990668 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27bb36a2-bfe5-4dca-a828-ea50cd77e9f3" path="/var/lib/kubelet/pods/27bb36a2-bfe5-4dca-a828-ea50cd77e9f3/volumes" Jan 30 21:41:19 crc kubenswrapper[4751]: I0130 21:41:19.991461 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c695f6-b212-4f47-9a88-76996d92772d" path="/var/lib/kubelet/pods/b3c695f6-b212-4f47-9a88-76996d92772d/volumes" Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.407722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"f457a67532d4f0db550f8ecb8534d2f14439921dfe2fa3ad9b5dce020449c82e"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.408032 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3c7d82f-3209-44cf-a463-9affaab3de75","Type":"ContainerStarted","Data":"2841c17d7bd5007c17762f8b1fb7f7b2e1a03dc730a5a02ed26681c917d526a5"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.412837 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"} Jan 30 21:41:20 crc kubenswrapper[4751]: I0130 21:41:20.436961 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.436941693 podStartE2EDuration="2.436941693s" podCreationTimestamp="2026-01-30 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:41:20.427027477 +0000 UTC m=+1619.172850126" watchObservedRunningTime="2026-01-30 21:41:20.436941693 +0000 UTC m=+1619.182764342" Jan 30 21:41:21 crc kubenswrapper[4751]: I0130 21:41:21.423099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.014672 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.014929 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.434580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerStarted","Data":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} Jan 30 
21:41:22 crc kubenswrapper[4751]: I0130 21:41:22.471695 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.738581699 podStartE2EDuration="5.471674709s" podCreationTimestamp="2026-01-30 21:41:17 +0000 UTC" firstStartedPulling="2026-01-30 21:41:18.517580407 +0000 UTC m=+1617.263403056" lastFinishedPulling="2026-01-30 21:41:21.250673407 +0000 UTC m=+1619.996496066" observedRunningTime="2026-01-30 21:41:22.45529594 +0000 UTC m=+1621.201118589" watchObservedRunningTime="2026-01-30 21:41:22.471674709 +0000 UTC m=+1621.217497358" Jan 30 21:41:23 crc kubenswrapper[4751]: I0130 21:41:23.403253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 21:41:27 crc kubenswrapper[4751]: I0130 21:41:27.015382 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:27 crc kubenswrapper[4751]: I0130 21:41:27.016419 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.031629 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="179951f5-39be-43d7-a2fa-3c6f04555760" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.031612 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="179951f5-39be-43d7-a2fa-3c6f04555760" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.403734 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.446469 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.569413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.774962 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:28 crc kubenswrapper[4751]: I0130 21:41:28.775655 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 21:41:29 crc kubenswrapper[4751]: I0130 21:41:29.787546 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3c7d82f-3209-44cf-a463-9affaab3de75" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:29 crc kubenswrapper[4751]: I0130 21:41:29.787567 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3c7d82f-3209-44cf-a463-9affaab3de75" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.14:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 21:41:30 crc kubenswrapper[4751]: I0130 21:41:30.976502 4751 scope.go:117] "RemoveContainer" 
containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:30 crc kubenswrapper[4751]: E0130 21:41:30.977060 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:33 crc kubenswrapper[4751]: I0130 21:41:33.486690 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.019892 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.025723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.029353 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 21:41:37 crc kubenswrapper[4751]: I0130 21:41:37.644126 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.320319 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.320639 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics" containerID="cri-o://c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e" gracePeriod=30 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.434269 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.434699 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter" containerID="cri-o://20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" gracePeriod=30 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.689433 4751 generic.go:334] "Generic (PLEG): container finished" podID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerID="c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e" exitCode=2 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.689516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerDied","Data":"c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e"} Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.694870 4751 generic.go:334] "Generic (PLEG): container finished" podID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerID="20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" exitCode=2 Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.694981 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerDied","Data":"20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472"} Jan 
30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.788847 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.789850 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.802635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 21:41:38 crc kubenswrapper[4751]: I0130 21:41:38.804616 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.110777 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.115465 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.238874 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") pod \"67d207d6-2cd8-4679-919b-dedddeebd28d\" (UID: \"67d207d6-2cd8-4679-919b-dedddeebd28d\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.238969 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.239107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.239252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") pod \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\" (UID: \"0ea4b0a2-4b62-47b1-b925-f78af9c42125\") " Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.246397 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj" (OuterVolumeSpecName: "kube-api-access-g5zjj") pod "67d207d6-2cd8-4679-919b-dedddeebd28d" (UID: "67d207d6-2cd8-4679-919b-dedddeebd28d"). InnerVolumeSpecName "kube-api-access-g5zjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.248914 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg" (OuterVolumeSpecName: "kube-api-access-lwtvg") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "kube-api-access-lwtvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.280419 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.306488 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data" (OuterVolumeSpecName: "config-data") pod "0ea4b0a2-4b62-47b1-b925-f78af9c42125" (UID: "0ea4b0a2-4b62-47b1-b925-f78af9c42125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342656 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwtvg\" (UniqueName: \"kubernetes.io/projected/0ea4b0a2-4b62-47b1-b925-f78af9c42125-kube-api-access-lwtvg\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342690 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342700 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zjj\" (UniqueName: \"kubernetes.io/projected/67d207d6-2cd8-4679-919b-dedddeebd28d-kube-api-access-g5zjj\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.342708 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea4b0a2-4b62-47b1-b925-f78af9c42125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0ea4b0a2-4b62-47b1-b925-f78af9c42125","Type":"ContainerDied","Data":"de8e457ae0f4068038c3e6dd30bdd6296bb65bd86e565ea48c1e280f1358b506"} Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709377 4751 scope.go:117] "RemoveContainer" containerID="20dafdd7e367671986fd5ba74e2e896a728dd40248873c547c64ef7943928472" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.709544 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.714994 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"67d207d6-2cd8-4679-919b-dedddeebd28d","Type":"ContainerDied","Data":"3bb2d0d293bcca63ced4a6eec87e280101ac65a5555311aa13f1e064ca31af8e"} Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.715631 4751 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.716106 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.790991 4751 scope.go:117] "RemoveContainer" containerID="c8daadd27b9052e4c383910cfe816522e7df6c5dba304b05d5d1d591c21b393e"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.821408 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.843534 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.876021 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.903288 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.924808 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: E0130 21:41:39.925357 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925378 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter"
Jan 30 21:41:39 crc kubenswrapper[4751]: E0130 21:41:39.925394 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925400 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925639 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" containerName="mysqld-exporter"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.925657 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" containerName="kube-state-metrics"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.926448 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929109 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929339 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.929879 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.953354 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.967995 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.970130 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.979918 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Jan 30 21:41:39 crc kubenswrapper[4751]: I0130 21:41:39.980224 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.002915 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea4b0a2-4b62-47b1-b925-f78af9c42125" path="/var/lib/kubelet/pods/0ea4b0a2-4b62-47b1-b925-f78af9c42125/volumes"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.003632 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d207d6-2cd8-4679-919b-dedddeebd28d" path="/var/lib/kubelet/pods/67d207d6-2cd8-4679-919b-dedddeebd28d/volumes"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.004384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059269 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.059854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.060866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.162999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163693 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.163881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.170316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.171017 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.171648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.172310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.176368 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0cb07e-2f77-49e2-931f-c896c3962f9d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.177000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f85043-bc84-41e2-9f14-a08f96da06f2-config-data\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.180769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg69\" (UniqueName: \"kubernetes.io/projected/e7f85043-bc84-41e2-9f14-a08f96da06f2-kube-api-access-xbg69\") pod \"mysqld-exporter-0\" (UID: \"e7f85043-bc84-41e2-9f14-a08f96da06f2\") " pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.184672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9b7g\" (UniqueName: \"kubernetes.io/projected/dc0cb07e-2f77-49e2-931f-c896c3962f9d-kube-api-access-m9b7g\") pod \"kube-state-metrics-0\" (UID: \"dc0cb07e-2f77-49e2-931f-c896c3962f9d\") " pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.254081 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.298099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.749294 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 21:41:40 crc kubenswrapper[4751]: W0130 21:41:40.865549 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f85043_bc84_41e2_9f14_a08f96da06f2.slice/crio-759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b WatchSource:0}: Error finding container 759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b: Status 404 returned error can't find the container with id 759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b
Jan 30 21:41:40 crc kubenswrapper[4751]: I0130 21:41:40.872711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048493 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048799 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" containerID="cri-o://9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" gracePeriod=30
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048862 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" containerID="cri-o://7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" gracePeriod=30
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048884 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" containerID="cri-o://42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" gracePeriod=30
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.048821 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" containerID="cri-o://75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" gracePeriod=30
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740154 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc0cb07e-2f77-49e2-931f-c896c3962f9d","Type":"ContainerStarted","Data":"730d589bfd7e40310877bbd01906d1e2f9684c870a2bcc74a9c01fec72ae8cc7"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740656 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.740676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc0cb07e-2f77-49e2-931f-c896c3962f9d","Type":"ContainerStarted","Data":"cea33b0f971d5c3b96f9278f0a8f5b377432392741280de21b015aa6529c8508"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.743249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e7f85043-bc84-41e2-9f14-a08f96da06f2","Type":"ContainerStarted","Data":"e74df03b6505107d773a0997b31506d9bd629597296e805158507eccbfde63b1"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.743290 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e7f85043-bc84-41e2-9f14-a08f96da06f2","Type":"ContainerStarted","Data":"759a82268f0d9dfc45ff3777bbe4d84bb6156c47d4c33e30f6d0c095929fdc3b"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.745988 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" exitCode=0
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.746019 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" exitCode=2
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.746028 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" exitCode=0
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747440 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.747450 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"}
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.778070 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.411605013 podStartE2EDuration="2.778047917s" podCreationTimestamp="2026-01-30 21:41:39 +0000 UTC" firstStartedPulling="2026-01-30 21:41:40.75101702 +0000 UTC m=+1639.496839669" lastFinishedPulling="2026-01-30 21:41:41.117459914 +0000 UTC m=+1639.863282573" observedRunningTime="2026-01-30 21:41:41.768604713 +0000 UTC m=+1640.514427372" watchObservedRunningTime="2026-01-30 21:41:41.778047917 +0000 UTC m=+1640.523870566"
Jan 30 21:41:41 crc kubenswrapper[4751]: I0130 21:41:41.800998 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.257528416 podStartE2EDuration="2.800972891s" podCreationTimestamp="2026-01-30 21:41:39 +0000 UTC" firstStartedPulling="2026-01-30 21:41:40.870320085 +0000 UTC m=+1639.616142744" lastFinishedPulling="2026-01-30 21:41:41.41376457 +0000 UTC m=+1640.159587219" observedRunningTime="2026-01-30 21:41:41.781983942 +0000 UTC m=+1640.527806601" watchObservedRunningTime="2026-01-30 21:41:41.800972891 +0000 UTC m=+1640.546795540"
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.596942 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.678881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.678930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679041 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679086 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679141 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.679200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") pod \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\" (UID: \"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4\") "
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.681276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.681981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.688000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts" (OuterVolumeSpecName: "scripts") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.691829 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv" (OuterVolumeSpecName: "kube-api-access-jrzcv") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "kube-api-access-jrzcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.722802 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773568 4751 generic.go:334] "Generic (PLEG): container finished" podID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" exitCode=0
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"}
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4","Type":"ContainerDied","Data":"b29538a0f0729315334798054b08c861d4a2892710b8f4d3c3dc99157c776bfc"}
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773665 4751 scope.go:117] "RemoveContainer" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.773681 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.779409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782349 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782388 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782403 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrzcv\" (UniqueName: \"kubernetes.io/projected/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-kube-api-access-jrzcv\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782430 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.782440 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.836850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data" (OuterVolumeSpecName: "config-data") pod "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" (UID: "0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.842200 4751 scope.go:117] "RemoveContainer" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.871206 4751 scope.go:117] "RemoveContainer" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.884657 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.891396 4751 scope.go:117] "RemoveContainer" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.919993 4751 scope.go:117] "RemoveContainer" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.920544 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": container with ID starting with 75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c not found: ID does not exist" containerID="75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920583 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c"} err="failed to get container status \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": rpc error: code = NotFound desc = could not find container \"75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c\": container with ID starting with 75005102ca65b3f0e9f410be581c7bc514111a4107f51966cdb89335fa00f97c not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920607 4751 scope.go:117] "RemoveContainer" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.920882 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": container with ID starting with 42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4 not found: ID does not exist" containerID="42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920908 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4"} err="failed to get container status \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": rpc error: code = NotFound desc = could not find container \"42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4\": container with ID starting with 42d96192b9f4e57fad5e092242dbde88d5cbd680cc2072903710de3fa91d35c4 not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.920926 4751 scope.go:117] "RemoveContainer" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 
21:41:43.921211 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": container with ID starting with 7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0 not found: ID does not exist" containerID="7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921236 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0"} err="failed to get container status \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": rpc error: code = NotFound desc = could not find container \"7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0\": container with ID starting with 7098754f73befc22be64c16c1ee0539387fa4e3d2de6a1eafcbb5688afd2fea0 not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921251 4751 scope.go:117] "RemoveContainer" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.921600 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": container with ID starting with 9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d not found: ID does not exist" containerID="9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.921623 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d"} err="failed to get container status \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": rpc error: code = NotFound desc = could not find container \"9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d\": container with ID starting with 9ddca37abc14295092e304e16f057e9292413e612ba26b83ffc562a352e5010d not found: ID does not exist" Jan 30 21:41:43 crc kubenswrapper[4751]: I0130 21:41:43.975521 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:41:43 crc kubenswrapper[4751]: E0130 21:41:43.975780 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.166664 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.184493 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.201414 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202023 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" 
containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202056 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202091 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202099 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202130 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202138 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: E0130 21:41:44.202162 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202169 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202445 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-central-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202461 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="ceilometer-notification-agent" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202471 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="proxy-httpd" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.202498 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" containerName="sg-core" Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.205229 4751 util.go:30] "No sandbox for pod can be found. 
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.211639 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.211956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.214169 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.217798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.294740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.294986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.295711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.397908 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.397973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.398009 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399116 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.399802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.400056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.404065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.404227 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.405007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.405587 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.411808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.417640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"ceilometer-0\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " pod="openstack/ceilometer-0"
Jan 30 21:41:44 crc kubenswrapper[4751]: I0130 21:41:44.556411 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 21:41:45 crc kubenswrapper[4751]: W0130 21:41:45.067565 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6664ba_7684_43bb_a51d_9e508b308b3d.slice/crio-f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6 WatchSource:0}: Error finding container f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6: Status 404 returned error can't find the container with id f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6
Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.079536 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.803159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"}
Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.803425 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6"}
Jan 30 21:41:45 crc kubenswrapper[4751]: I0130 21:41:45.994051 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4" path="/var/lib/kubelet/pods/0e7e1ccb-4a4c-4b79-a03c-50580e75dfd4/volumes"
Jan 30 21:41:46 crc kubenswrapper[4751]: I0130 21:41:46.818311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"}
Jan 30 21:41:47 crc kubenswrapper[4751]: I0130 21:41:47.833259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"}
Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.854118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerStarted","Data":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"}
Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.855176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 21:41:49 crc kubenswrapper[4751]: I0130 21:41:49.884415 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.592528319 podStartE2EDuration="5.884393058s" podCreationTimestamp="2026-01-30 21:41:44 +0000 UTC" firstStartedPulling="2026-01-30 21:41:45.070175589 +0000 UTC m=+1643.815998248" lastFinishedPulling="2026-01-30 21:41:49.362040308 +0000 UTC m=+1648.107862987" observedRunningTime="2026-01-30 21:41:49.875188172 +0000 UTC m=+1648.621010821" watchObservedRunningTime="2026-01-30 21:41:49.884393058 +0000 UTC m=+1648.630215727"
Jan 30 21:41:50 crc kubenswrapper[4751]: I0130 21:41:50.268066 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 21:41:54 crc kubenswrapper[4751]: I0130 21:41:54.976867 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:41:54 crc kubenswrapper[4751]: E0130 21:41:54.977825 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:42:09 crc kubenswrapper[4751]: I0130 21:42:09.975758 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:42:09 crc kubenswrapper[4751]: E0130 21:42:09.976614 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:42:14 crc kubenswrapper[4751]: I0130 21:42:14.575566 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 21:42:20 crc kubenswrapper[4751]: I0130 21:42:20.976643 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:42:20 crc kubenswrapper[4751]: E0130 21:42:20.979164 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:42:24 crc kubenswrapper[4751]: I0130 21:42:24.978588 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-npwgd"]
Jan 30 21:42:24 crc kubenswrapper[4751]: I0130 21:42:24.989399 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-npwgd"]
Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.077014 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xw5xf"]
Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.078546 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.116534 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239695 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.239741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.341903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.342016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.342040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.367589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.367853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"heat-db-sync-xw5xf\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.370357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"heat-db-sync-xw5xf\" (UID: 
\"27b928b3-101e-4649-ae57-9857145062f0\") " pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.407178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:42:25 crc kubenswrapper[4751]: I0130 21:42:25.990751 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1051dd3c-5d30-47f1-8162-3a3e9d5ee271" path="/var/lib/kubelet/pods/1051dd3c-5d30-47f1-8162-3a3e9d5ee271/volumes" Jan 30 21:42:26 crc kubenswrapper[4751]: W0130 21:42:26.009788 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b928b3_101e_4649_ae57_9857145062f0.slice/crio-404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6 WatchSource:0}: Error finding container 404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6: Status 404 returned error can't find the container with id 404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6 Jan 30 21:42:26 crc kubenswrapper[4751]: I0130 21:42:26.012304 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:42:26 crc kubenswrapper[4751]: I0130 21:42:26.321502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerStarted","Data":"404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6"} Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.006526 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.608920 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609548 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" containerID="cri-o://ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609705 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" containerID="cri-o://a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609706 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" containerID="cri-o://0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" gracePeriod=30 Jan 30 21:42:27 crc kubenswrapper[4751]: I0130 21:42:27.609820 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" containerID="cri-o://4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" gracePeriod=30 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.129297 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351213 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" 
containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" exitCode=0 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351753 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" exitCode=2 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351768 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" exitCode=0 Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"} Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"} Jan 30 21:42:28 crc kubenswrapper[4751]: I0130 21:42:28.351825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"} Jan 30 21:42:29 crc kubenswrapper[4751]: I0130 21:42:29.978437 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120045 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120406 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120513 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120569 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120648 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.120867 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") pod \"dc6664ba-7684-43bb-a51d-9e508b308b3d\" (UID: \"dc6664ba-7684-43bb-a51d-9e508b308b3d\") " Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.124770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.126644 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.130423 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts" (OuterVolumeSpecName: "scripts") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.158558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4" (OuterVolumeSpecName: "kube-api-access-85tk4") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "kube-api-access-85tk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.212651 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224362 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224402 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224416 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224427 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85tk4\" (UniqueName: \"kubernetes.io/projected/dc6664ba-7684-43bb-a51d-9e508b308b3d-kube-api-access-85tk4\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.224441 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc6664ba-7684-43bb-a51d-9e508b308b3d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.255833 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.295178 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.326654 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.326684 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.369562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data" (OuterVolumeSpecName: "config-data") pod "dc6664ba-7684-43bb-a51d-9e508b308b3d" (UID: "dc6664ba-7684-43bb-a51d-9e508b308b3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380818 4751 generic.go:334] "Generic (PLEG): container finished" podID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" exitCode=0 Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"} Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380914 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc6664ba-7684-43bb-a51d-9e508b308b3d","Type":"ContainerDied","Data":"f0a52aaf12d629a20b40a1b124115c7c8878ba6a296c7b2005aac82e96f89bf6"} Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.380931 4751 scope.go:117] "RemoveContainer" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.381112 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.425257 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.426468 4751 scope.go:117] "RemoveContainer" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.433556 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6664ba-7684-43bb-a51d-9e508b308b3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.481278 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489165 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489707 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489725 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489740 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489745 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489768 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489775 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.489789 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 
21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.489798 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490010 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-notification-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="ceilometer-central-agent" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490035 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="sg-core" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.490057 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" containerName="proxy-httpd" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.500534 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.504956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.505035 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.505134 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.524268 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.549319 4751 scope.go:117] "RemoveContainer" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.593810 4751 scope.go:117] "RemoveContainer" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635525 4751 scope.go:117] "RemoveContainer" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.635912 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": container with ID starting with 4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725 not found: ID does not exist" containerID="4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635946 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725"} err="failed to get container status \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": rpc error: code = NotFound desc = could not find container \"4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725\": container with ID starting with 4c807eb39138fcbab426b7e6182ee8892e0e25c2dc8b96a85aed38bf80f64725 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.635970 4751 scope.go:117] "RemoveContainer" 
containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.636676 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": container with ID starting with 0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534 not found: ID does not exist" containerID="0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636698 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534"} err="failed to get container status \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": rpc error: code = NotFound desc = could not find container \"0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534\": container with ID starting with 0e998353f70d72cf83742d5c19ae2a01ada43522339ce1c3fa0dee4d1c39d534 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636716 4751 scope.go:117] "RemoveContainer" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.636986 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": container with ID starting with a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875 not found: ID does not exist" containerID="a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637010 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875"} err="failed to get container status \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": rpc error: code = NotFound desc = could not find container \"a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875\": container with ID starting with a0f726c7cd142e46e1744145e306459683e54f9dc50a1f555f6aa463adc70875 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637028 4751 scope.go:117] "RemoveContainer" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.636998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637181 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.637412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: E0130 21:42:30.639385 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": container with ID starting with ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1 not found: ID does not exist" containerID="ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.639419 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1"} err="failed to get container status \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": rpc error: code = NotFound desc = could not find container \"ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1\": container with ID starting with ade08d2c4232a23eeb9d69dc022cccce47f3c7ec68732285f62203b01157f6a1 not found: ID does not exist" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.738894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739232 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739298 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-run-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.739782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c69dc070-7de6-4681-a44b-6e2007a7f671-log-httpd\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.746362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-scripts\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.746380 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.747610 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.747928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-config-data\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.758992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c69dc070-7de6-4681-a44b-6e2007a7f671-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.778390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8m7w\" (UniqueName: \"kubernetes.io/projected/c69dc070-7de6-4681-a44b-6e2007a7f671-kube-api-access-x8m7w\") pod \"ceilometer-0\" (UID: \"c69dc070-7de6-4681-a44b-6e2007a7f671\") " pod="openstack/ceilometer-0" Jan 30 21:42:30 crc kubenswrapper[4751]: I0130 21:42:30.830871 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 21:42:31 crc kubenswrapper[4751]: I0130 21:42:31.511008 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 21:42:31 crc kubenswrapper[4751]: I0130 21:42:31.984763 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:31 crc kubenswrapper[4751]: E0130 21:42:31.985091 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.018743 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6664ba-7684-43bb-a51d-9e508b308b3d" path="/var/lib/kubelet/pods/dc6664ba-7684-43bb-a51d-9e508b308b3d/volumes" Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.328923 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" containerID="cri-o://8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" gracePeriod=604795 Jan 30 21:42:32 crc kubenswrapper[4751]: I0130 21:42:32.408611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"d210c137bcb42ac00e7e4b52dbd153bb38b8dbaeac853a4d911624e47f822e61"} Jan 30 21:42:33 
crc kubenswrapper[4751]: I0130 21:42:33.110094 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" containerID="cri-o://654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" gracePeriod=604796 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.390146 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.429053 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.137:5671: connect: connection refused" Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.514736 4751 generic.go:334] "Generic (PLEG): container finished" podID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerID="654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" exitCode=0 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.514820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1"} Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.517142 4751 generic.go:334] "Generic (PLEG): container finished" podID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerID="8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" exitCode=0 Jan 30 21:42:40 crc kubenswrapper[4751]: I0130 21:42:40.517205 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d"} Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.532856 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"192a5913-0c28-4214-9ac0-d37ca2eeb34c","Type":"ContainerDied","Data":"e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821"} Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.533418 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cf5c49c1669ca82eb54f5065d6c41864e69f1712fef33128cb50eb2e139821" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.619135 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719778 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.719952 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720109 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") pod 
\"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.720226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") pod \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\" (UID: \"192a5913-0c28-4214-9ac0-d37ca2eeb34c\") " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.721737 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.722387 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.725220 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.741018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.741552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.744599 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info" (OuterVolumeSpecName: "pod-info") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.753804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r" (OuterVolumeSpecName: "kube-api-access-nlq5r") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "kube-api-access-nlq5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.787241 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data" (OuterVolumeSpecName: "config-data") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.798980 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832" (OuterVolumeSpecName: "persistence") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "pvc-e3070f15-4c27-478d-9eeb-e56a8b304832". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825719 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825751 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlq5r\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-kube-api-access-nlq5r\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825762 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192a5913-0c28-4214-9ac0-d37ca2eeb34c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825772 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825804 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") on node \"crc\" " Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825816 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192a5913-0c28-4214-9ac0-d37ca2eeb34c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825824 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825832 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.825870 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.826238 4751 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf" (OuterVolumeSpecName: "server-conf") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.887805 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.887966 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e3070f15-4c27-478d-9eeb-e56a8b304832" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832") on node "crc" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.928922 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:41 crc kubenswrapper[4751]: I0130 21:42:41.928955 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192a5913-0c28-4214-9ac0-d37ca2eeb34c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.055508 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "192a5913-0c28-4214-9ac0-d37ca2eeb34c" (UID: "192a5913-0c28-4214-9ac0-d37ca2eeb34c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.126835 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:42:42 crc kubenswrapper[4751]: E0130 21:42:42.127433 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="setup-container" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127451 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="setup-container" Jan 30 21:42:42 crc kubenswrapper[4751]: E0130 21:42:42.127463 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127469 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.127686 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" containerName="rabbitmq" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.128927 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.133900 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.134178 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192a5913-0c28-4214-9ac0-d37ca2eeb34c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.150655 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238286 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238426 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238489 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.238524 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc 
kubenswrapper[4751]: I0130 21:42:42.341076 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341173 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341427 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.341723 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342842 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.342901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.343242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.367543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"dnsmasq-dns-5b75489c6f-wkszw\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.466599 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.541300 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.581335 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.596528 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.613102 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.615297 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.647144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750311 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.750655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751657 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.751786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853556 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853638 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.853675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854140 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " 
pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854597 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.854908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.855324 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-config-data\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.855762 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29afad92-51c9-45a8-a6a0-ed64925f91f3-server-conf\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.858648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.859391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.859740 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29afad92-51c9-45a8-a6a0-ed64925f91f3-pod-info\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.863932 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.863961 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38819e4ab89b59440f000d1a076c7489b3d13c82621db763cbf8d17a6b6689f4/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.873184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ztt\" (UniqueName: \"kubernetes.io/projected/29afad92-51c9-45a8-a6a0-ed64925f91f3-kube-api-access-69ztt\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.875681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29afad92-51c9-45a8-a6a0-ed64925f91f3-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:42 crc kubenswrapper[4751]: I0130 21:42:42.973512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3070f15-4c27-478d-9eeb-e56a8b304832\") pod \"rabbitmq-server-2\" (UID: \"29afad92-51c9-45a8-a6a0-ed64925f91f3\") " pod="openstack/rabbitmq-server-2" Jan 30 21:42:43 crc kubenswrapper[4751]: I0130 21:42:43.246829 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 30 21:42:43 crc kubenswrapper[4751]: I0130 21:42:43.990310 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a5913-0c28-4214-9ac0-d37ca2eeb34c" path="/var/lib/kubelet/pods/192a5913-0c28-4214-9ac0-d37ca2eeb34c/volumes" Jan 30 21:42:46 crc kubenswrapper[4751]: I0130 21:42:46.975907 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:42:46 crc kubenswrapper[4751]: E0130 21:42:46.976995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.341841 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430838 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.430943 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431078 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") pod 
\"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431160 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431211 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431269 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.431875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"61d75daf-41cb-4ab5-b849-c98080ca748b\" (UID: \"61d75daf-41cb-4ab5-b849-c98080ca748b\") " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.439060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info" (OuterVolumeSpecName: "pod-info") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.441052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.441384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.445824 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.446618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.448813 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.449518 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d" (OuterVolumeSpecName: "kube-api-access-8hw2d") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "kube-api-access-8hw2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.495500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98" (OuterVolumeSpecName: "persistence") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "pvc-415c3201-a3f6-4e58-8696-79f9797a5e98". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.518352 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data" (OuterVolumeSpecName: "config-data") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.537105 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541376 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541541 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61d75daf-41cb-4ab5-b849-c98080ca748b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541604 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541671 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hw2d\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-kube-api-access-8hw2d\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541764 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") on node \"crc\" " Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541842 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61d75daf-41cb-4ab5-b849-c98080ca748b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541902 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.541962 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.588881 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf" (OuterVolumeSpecName: "server-conf") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.605067 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.605213 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-415c3201-a3f6-4e58-8696-79f9797a5e98" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98") on node "crc" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.641880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "61d75daf-41cb-4ab5-b849-c98080ca748b" (UID: "61d75daf-41cb-4ab5-b849-c98080ca748b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645643 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645833 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61d75daf-41cb-4ab5-b849-c98080ca748b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.645920 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61d75daf-41cb-4ab5-b849-c98080ca748b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.650941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"61d75daf-41cb-4ab5-b849-c98080ca748b","Type":"ContainerDied","Data":"f4f06c01fc35225b23f5f598399e00ef90da1d1a2d96b3cf839a507f64a8e8e3"} Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.651030 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.651141 4751 scope.go:117] "RemoveContainer" containerID="654aa5cd180d3480262a0eb6327c9c516fd2aafbea0de4e5b807e47db7d88dd1" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.732205 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.771360 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782228 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: E0130 21:42:49.782904 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782933 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: E0130 21:42:49.782950 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="setup-container" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.782956 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="setup-container" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.783193 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" containerName="rabbitmq" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.784761 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.786762 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qvp6f" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787145 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787341 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787561 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787674 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.787890 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.817452 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851366 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.851903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852009 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852369 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852446 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.852686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954535 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954659 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954801 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.954858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.955262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.956298 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.958592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.959602 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa019efa-4067-4bd5-b370-12f6a4e6b856-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.960869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.960945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa019efa-4067-4bd5-b370-12f6a4e6b856-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa019efa-4067-4bd5-b370-12f6a4e6b856-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961489 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961517 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea75936dffe846fa8fe6e7d04e4555ffbed93863b04fcd828432921ea88ef24a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.961889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.979069 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.981653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8k5\" (UniqueName: \"kubernetes.io/projected/aa019efa-4067-4bd5-b370-12f6a4e6b856-kube-api-access-mj8k5\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:49 crc kubenswrapper[4751]: I0130 21:42:49.999394 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d75daf-41cb-4ab5-b849-c98080ca748b" path="/var/lib/kubelet/pods/61d75daf-41cb-4ab5-b849-c98080ca748b/volumes"
Jan 30 21:42:50 crc kubenswrapper[4751]: I0130 21:42:50.029221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-415c3201-a3f6-4e58-8696-79f9797a5e98\") pod \"rabbitmq-cell1-server-0\" (UID: \"aa019efa-4067-4bd5-b370-12f6a4e6b856\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:50 crc kubenswrapper[4751]: I0130 21:42:50.110350 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.875238 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.876620 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.876751 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfv8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-xw5xf_openstack(27b928b3-101e-4649-ae57-9857145062f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:42:52 crc kubenswrapper[4751]: E0130 21:42:52.877946 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-xw5xf" podUID="27b928b3-101e-4649-ae57-9857145062f0"
Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.306074 4751 scope.go:117] "RemoveContainer" containerID="cf3b264e8ec141124dc8cea806067e0197228587097f1a72076d1d5e3beee32f"
Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379743 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379834 4751 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.379973 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8h5fdh8hf9h5f6h647h79hffh5c9h546h67dh9h57fhb9h5f5h589h559h56dh9dhf6h78h657h5d5hb6h545h548h64dh64h686h5fdh64h584q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8m7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c69dc070-7de6-4681-a44b-6e2007a7f671): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 21:42:53 crc kubenswrapper[4751]: E0130 21:42:53.709167 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-xw5xf" podUID="27b928b3-101e-4649-ae57-9857145062f0"
Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.856941 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"]
Jan 30 21:42:53 crc kubenswrapper[4751]: I0130 21:42:53.868892 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 30 21:42:53 crc kubenswrapper[4751]: W0130 21:42:53.874564 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afad92_51c9_45a8_a6a0_ed64925f91f3.slice/crio-8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9 WatchSource:0}: Error finding container 8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9: Status 404 returned error can't find the container with id 8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.014491 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 21:42:54 crc kubenswrapper[4751]: W0130 21:42:54.018359 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa019efa_4067_4bd5_b370_12f6a4e6b856.slice/crio-e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31 WatchSource:0}: Error finding container e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31: Status 404 returned error can't find the container with id e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.721514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"f23cbc773929535c267b1b565c65bd190c33497cc90203d37681c600fe7f010d"}
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.723409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"8b3e976bf7ba71f1dec107ee762a8a5db7c6c511e6ae106509a4027d27b060d9"}
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.724977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"e25a35954b0e2f7dc7b2823f65739f2a45ca644dd1ee4859681b189dab2afd31"}
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726742 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" exitCode=0
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68"}
Jan 30 21:42:54 crc kubenswrapper[4751]: I0130 21:42:54.726790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerStarted","Data":"f71d247a34202ab7d446d615619ab6a42ca31575f3e6d6363567457b8f0020dc"}
Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.741114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"db34539eb93e17c04711ee82d3a70653aa68a76b548521348f6d116019544d4b"}
Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.743458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerStarted","Data":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"}
Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.745246 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw"
Jan 30 21:42:55 crc kubenswrapper[4751]: I0130 21:42:55.773786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" podStartSLOduration=13.77375518 podStartE2EDuration="13.77375518s" podCreationTimestamp="2026-01-30 21:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:42:55.764832069 +0000 UTC m=+1714.510654818" watchObservedRunningTime="2026-01-30 21:42:55.77375518 +0000 UTC m=+1714.519577869"
Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:56.783521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11"}
Jan 30 21:42:56 crc kubenswrapper[4751]: I0130 21:42:56.801122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec"}
Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.361610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671"
Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.817496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"0fc60174b581299430c4303e26c6f1061519111331cf423fa1f499b255f18b87"}
Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.818888 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.821496 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671"
Jan 30 21:42:57 crc kubenswrapper[4751]: I0130 21:42:57.977811 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:42:57 crc kubenswrapper[4751]: E0130 21:42:57.978748 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:42:58 crc kubenswrapper[4751]: E0130 21:42:58.839797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="c69dc070-7de6-4681-a44b-6e2007a7f671"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.468576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.525182 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"]
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.525427 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" containerID="cri-o://594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" gracePeriod=10
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.718099 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"]
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.721646 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.744717 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"]
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.802854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb"
Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803778 4751
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803862 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.803922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.925819 4751 generic.go:334] "Generic (PLEG): container finished" podID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerID="594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" exitCode=0 Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.925860 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7"} Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926535 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") 
pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926632 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.926649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.927708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.929029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-config\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.929622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.931527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.934274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.934791 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1eb1b0d1-2407-440a-826b-b5158aab8be3-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:02 crc kubenswrapper[4751]: I0130 21:43:02.962914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngzwh\" (UniqueName: \"kubernetes.io/projected/1eb1b0d1-2407-440a-826b-b5158aab8be3-kube-api-access-ngzwh\") pod \"dnsmasq-dns-5d75f767dc-7h4pb\" (UID: \"1eb1b0d1-2407-440a-826b-b5158aab8be3\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.070130 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.373059 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558609 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558664 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.558816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") pod \"294126cb-98f1-4a1b-84eb-256f24d312ec\" (UID: \"294126cb-98f1-4a1b-84eb-256f24d312ec\") " Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.565743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz" (OuterVolumeSpecName: "kube-api-access-zvszz") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "kube-api-access-zvszz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.641549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.643769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config" (OuterVolumeSpecName: "config") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.645925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661581 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661612 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661622 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvszz\" (UniqueName: \"kubernetes.io/projected/294126cb-98f1-4a1b-84eb-256f24d312ec-kube-api-access-zvszz\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661632 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.661641 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.683212 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "294126cb-98f1-4a1b-84eb-256f24d312ec" (UID: "294126cb-98f1-4a1b-84eb-256f24d312ec"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.764200 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/294126cb-98f1-4a1b-84eb-256f24d312ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.796765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-7h4pb"] Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.940170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerStarted","Data":"3466fcdc52df7c2089d6d3aa53857d5c96a39ae6774283b089964ca3f45c3a64"} Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.942747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" event={"ID":"294126cb-98f1-4a1b-84eb-256f24d312ec","Type":"ContainerDied","Data":"dee8c983aae4ef6924c6d9d77cb8b52f55a1c21e202b60331cafd82b2208a0d0"} Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.942829 4751 scope.go:117] "RemoveContainer" containerID="594cfabac973f7db3981d05b5c834beff59d6ed05b3a70c8219822cdac4213e7" Jan 30 21:43:03 crc kubenswrapper[4751]: I0130 21:43:03.943303 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.004140 4751 scope.go:117] "RemoveContainer" containerID="6917c598c5eba82a1b463890dd92dd5f9d24bd22527e450c4ff5bef9192d6678" Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.010796 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.025675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-z9wt9"] Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.957697 4751 generic.go:334] "Generic (PLEG): container finished" podID="1eb1b0d1-2407-440a-826b-b5158aab8be3" containerID="c9d73d17215eb049095438aaa3593690ae9f3794d74be4caa48b5003e376b0ba" exitCode=0 Jan 30 21:43:04 crc kubenswrapper[4751]: I0130 21:43:04.957740 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerDied","Data":"c9d73d17215eb049095438aaa3593690ae9f3794d74be4caa48b5003e376b0ba"} Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.008152 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" path="/var/lib/kubelet/pods/294126cb-98f1-4a1b-84eb-256f24d312ec/volumes" Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.009752 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" event={"ID":"1eb1b0d1-2407-440a-826b-b5158aab8be3","Type":"ContainerStarted","Data":"7b36b4dac611412fc837a5177e36a388bf54508291136f08db1d4dbf270b7d1d"} Jan 30 21:43:06 crc kubenswrapper[4751]: I0130 21:43:06.009791 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:07 crc kubenswrapper[4751]: I0130 21:43:07.019489 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" podStartSLOduration=5.019460075 podStartE2EDuration="5.019460075s" 
podCreationTimestamp="2026-01-30 21:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:06.014701499 +0000 UTC m=+1724.760524158" watchObservedRunningTime="2026-01-30 21:43:07.019460075 +0000 UTC m=+1725.765282764" Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.030984 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerStarted","Data":"74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952"} Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.066885 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xw5xf" podStartSLOduration=1.875446451 podStartE2EDuration="43.066863823s" podCreationTimestamp="2026-01-30 21:42:25 +0000 UTC" firstStartedPulling="2026-01-30 21:42:26.011541638 +0000 UTC m=+1684.757364287" lastFinishedPulling="2026-01-30 21:43:07.20295901 +0000 UTC m=+1725.948781659" observedRunningTime="2026-01-30 21:43:08.048488696 +0000 UTC m=+1726.794311395" watchObservedRunningTime="2026-01-30 21:43:08.066863823 +0000 UTC m=+1726.812686482" Jan 30 21:43:08 crc kubenswrapper[4751]: I0130 21:43:08.327560 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84f9ccf-z9wt9" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.7:5353: i/o timeout" Jan 30 21:43:10 crc kubenswrapper[4751]: I0130 21:43:10.068062 4751 generic.go:334] "Generic (PLEG): container finished" podID="27b928b3-101e-4649-ae57-9857145062f0" containerID="74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952" exitCode=0 Jan 30 21:43:10 crc kubenswrapper[4751]: I0130 21:43:10.068148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerDied","Data":"74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952"} Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.590868 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.718933 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.719034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.719461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") pod \"27b928b3-101e-4649-ae57-9857145062f0\" (UID: \"27b928b3-101e-4649-ae57-9857145062f0\") " Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.725784 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d" (OuterVolumeSpecName: "kube-api-access-kfv8d") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "kube-api-access-kfv8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.768276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.819005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data" (OuterVolumeSpecName: "config-data") pod "27b928b3-101e-4649-ae57-9857145062f0" (UID: "27b928b3-101e-4649-ae57-9857145062f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823362 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfv8d\" (UniqueName: \"kubernetes.io/projected/27b928b3-101e-4649-ae57-9857145062f0-kube-api-access-kfv8d\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823416 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:11 crc kubenswrapper[4751]: I0130 21:43:11.823437 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b928b3-101e-4649-ae57-9857145062f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.020468 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.094947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xw5xf" event={"ID":"27b928b3-101e-4649-ae57-9857145062f0","Type":"ContainerDied","Data":"404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6"} Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.095628 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="404e00df247a8af350a6ca415370390444a1c5afadb4e6e6be032527b135bac6" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.095247 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xw5xf" Jan 30 21:43:12 crc kubenswrapper[4751]: I0130 21:43:12.976570 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:12 crc kubenswrapper[4751]: E0130 21:43:12.977097 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.051580 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052259 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052278 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052305 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: E0130 21:43:13.052318 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="init" Jan 30 
21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052344 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="init" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052666 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="294126cb-98f1-4a1b-84eb-256f24d312ec" containerName="dnsmasq-dns" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.052687 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b928b3-101e-4649-ae57-9857145062f0" containerName="heat-db-sync" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.053727 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.067930 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.073183 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-7h4pb" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.152211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c69dc070-7de6-4681-a44b-6e2007a7f671","Type":"ContainerStarted","Data":"c982ce18940dc550b209265bbc512e136723991b7c02b5ff8f903d2c79cb1d9d"} Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156267 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.156646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.191021 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.193020 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.246710 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.260712 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.270674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.271977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-combined-ca-bundle\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.275268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2465732f-6109-4d66-84c4-f08a6a1ac472-config-data-custom\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.297978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dpf\" (UniqueName: \"kubernetes.io/projected/2465732f-6109-4d66-84c4-f08a6a1ac472-kube-api-access-v2dpf\") pod \"heat-engine-7ccc7fc744-trd9b\" (UID: \"2465732f-6109-4d66-84c4-f08a6a1ac472\") " pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.312804 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 
21:43:13.313039 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" containerID="cri-o://9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" gracePeriod=10 Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.324565 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.326620 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.332493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.671070304 podStartE2EDuration="43.332476301s" podCreationTimestamp="2026-01-30 21:42:30 +0000 UTC" firstStartedPulling="2026-01-30 21:42:31.51784579 +0000 UTC m=+1690.263668429" lastFinishedPulling="2026-01-30 21:43:12.179251777 +0000 UTC m=+1730.925074426" observedRunningTime="2026-01-30 21:43:13.194407663 +0000 UTC m=+1731.940230312" watchObservedRunningTime="2026-01-30 21:43:13.332476301 +0000 UTC m=+1732.078298950" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362498 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.362910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.363015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfspw\" (UniqueName: 
\"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.364831 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.401498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.466491 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467184 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.467282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468231 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfspw\" (UniqueName: \"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468792 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.468972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.473701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.475267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-internal-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.476128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-config-data-custom\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.477909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-public-tls-certs\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.480025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce637680-0e89-4089-bbb7-704117a5dcb0-combined-ca-bundle\") pod 
\"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.493093 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfspw\" (UniqueName: \"kubernetes.io/projected/ce637680-0e89-4089-bbb7-704117a5dcb0-kube-api-access-bfspw\") pod \"heat-api-68c4b8fdd-wvfwg\" (UID: \"ce637680-0e89-4089-bbb7-704117a5dcb0\") " pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.544028 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571423 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.571457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.578544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.579121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-public-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.579201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-config-data-custom\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.580501 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-internal-tls-certs\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.580969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3100f81b-465d-42f8-9bbd-88e0aecbdc56-combined-ca-bundle\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.620097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg884\" (UniqueName: \"kubernetes.io/projected/3100f81b-465d-42f8-9bbd-88e0aecbdc56-kube-api-access-vg884\") pod \"heat-cfnapi-75666c8dc5-6rmsl\" (UID: \"3100f81b-465d-42f8-9bbd-88e0aecbdc56\") " pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:13 crc kubenswrapper[4751]: I0130 21:43:13.751146 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:14 crc kubenswrapper[4751]: W0130 21:43:14.029852 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2465732f_6109_4d66_84c4_f08a6a1ac472.slice/crio-e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad WatchSource:0}: Error finding container e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad: Status 404 returned error can't find the container with id e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.038533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7ccc7fc744-trd9b"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.101770 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.176859 4751 generic.go:334] "Generic (PLEG): container finished" podID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" exitCode=0 Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.177186 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-wkszw" event={"ID":"56ea54f1-23d8-4e09-b159-bd66a7bb5618","Type":"ContainerDied","Data":"f71d247a34202ab7d446d615619ab6a42ca31575f3e6d6363567457b8f0020dc"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.178595 4751 scope.go:117] "RemoveContainer" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.182163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccc7fc744-trd9b" event={"ID":"2465732f-6109-4d66-84c4-f08a6a1ac472","Type":"ContainerStarted","Data":"e782eb7df9abe611c23afd540f18ce0fde89c4bc457e46705177a9e1553a84ad"} Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.218751 4751 scope.go:117] "RemoveContainer" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253154 4751 scope.go:117] "RemoveContainer" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: E0130 21:43:14.253612 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": container with ID starting with 9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d not found: ID does not exist" containerID="9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253693 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d"} err="failed to get container status \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": rpc error: code = NotFound desc = could not find container \"9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d\": container with ID starting with 9bfea387009414d29fd4b757ae7280f94fc32d9703600af8a5aea204764e5f2d not found: ID does not exist" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.253738 4751 scope.go:117] "RemoveContainer" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: E0130 21:43:14.256735 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": container with ID starting with dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68 not found: ID does not exist" containerID="dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.258141 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68"} err="failed to get container status 
\"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": rpc error: code = NotFound desc = could not find container \"dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68\": container with ID starting with dc4c2bdf34d49b061d056abe71a3bb430c604f70769630df83fbe239747a1c68 not found: ID does not exist" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.271553 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c4b8fdd-wvfwg"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302401 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302462 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302539 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.302790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") pod \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\" (UID: \"56ea54f1-23d8-4e09-b159-bd66a7bb5618\") " Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.310100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89" (OuterVolumeSpecName: "kube-api-access-zkx89") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "kube-api-access-zkx89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.376144 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.376171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config" (OuterVolumeSpecName: "config") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.384965 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.394027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405908 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405938 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-config\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405947 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405955 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkx89\" (UniqueName: \"kubernetes.io/projected/56ea54f1-23d8-4e09-b159-bd66a7bb5618-kube-api-access-zkx89\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.405965 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.406386 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.411757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56ea54f1-23d8-4e09-b159-bd66a7bb5618" (UID: "56ea54f1-23d8-4e09-b159-bd66a7bb5618"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.459146 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-75666c8dc5-6rmsl"] Jan 30 21:43:14 crc kubenswrapper[4751]: W0130 21:43:14.464503 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3100f81b_465d_42f8_9bbd_88e0aecbdc56.slice/crio-30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d WatchSource:0}: Error finding container 30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d: Status 404 returned error can't find the container with id 30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.511416 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.511447 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ea54f1-23d8-4e09-b159-bd66a7bb5618-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.523834 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:14 crc kubenswrapper[4751]: I0130 21:43:14.535694 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-wkszw"] Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.194556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c4b8fdd-wvfwg" event={"ID":"ce637680-0e89-4089-bbb7-704117a5dcb0","Type":"ContainerStarted","Data":"ad24f26d8777d0ba1cfb276ba9cd17f4029eaa9d380eb07c9d996991575f8570"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.199228 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" event={"ID":"3100f81b-465d-42f8-9bbd-88e0aecbdc56","Type":"ContainerStarted","Data":"30520be9b041eff015abd949e64a4295c7efe4ebaff533e7aa05a8a0dcdc677d"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.200766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7ccc7fc744-trd9b" event={"ID":"2465732f-6109-4d66-84c4-f08a6a1ac472","Type":"ContainerStarted","Data":"658d9824ceb6afbb917ed8d0673d3d53a1c934b4964073e122686bd5ab6e0145"} Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.200955 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7ccc7fc744-trd9b" Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.225851 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7ccc7fc744-trd9b" podStartSLOduration=2.225835206 podStartE2EDuration="2.225835206s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:15.218606942 +0000 UTC m=+1733.964429581" watchObservedRunningTime="2026-01-30 21:43:15.225835206 +0000 UTC m=+1733.971657855" Jan 30 21:43:15 crc kubenswrapper[4751]: I0130 21:43:15.994261 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" path="/var/lib/kubelet/pods/56ea54f1-23d8-4e09-b159-bd66a7bb5618/volumes" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.240858 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" event={"ID":"3100f81b-465d-42f8-9bbd-88e0aecbdc56","Type":"ContainerStarted","Data":"8df6ee0b57ace7352f8521c0d1e8d0a4985bc8826b702b26426bdf59bee4a39c"} Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.241511 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.243543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c4b8fdd-wvfwg" event={"ID":"ce637680-0e89-4089-bbb7-704117a5dcb0","Type":"ContainerStarted","Data":"3582a9eb606a7b66235da50ed492f0932575cf3a5f51474ebfad897ddaba2434"} Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.255803 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.275664 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" podStartSLOduration=2.678256541 podStartE2EDuration="4.275644456s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="2026-01-30 21:43:14.469732084 +0000 UTC m=+1733.215554733" lastFinishedPulling="2026-01-30 21:43:16.067119999 +0000 UTC m=+1734.812942648" observedRunningTime="2026-01-30 21:43:17.263522929 +0000 UTC m=+1736.009345618" watchObservedRunningTime="2026-01-30 21:43:17.275644456 +0000 UTC m=+1736.021467105" Jan 30 21:43:17 crc kubenswrapper[4751]: I0130 21:43:17.287227 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-68c4b8fdd-wvfwg" podStartSLOduration=2.482500416 podStartE2EDuration="4.287205848s" podCreationTimestamp="2026-01-30 21:43:13 +0000 UTC" firstStartedPulling="2026-01-30 21:43:14.267237947 +0000 UTC m=+1733.013060596" lastFinishedPulling="2026-01-30 21:43:16.071943369 +0000 UTC m=+1734.817766028" observedRunningTime="2026-01-30 21:43:17.282321337 +0000 UTC m=+1736.028143986" watchObservedRunningTime="2026-01-30 21:43:17.287205848 +0000 UTC m=+1736.033028487" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.076254 4751 scope.go:117] "RemoveContainer" containerID="f5e405ed39cb57c7e634de9365462e74ee99a3051cc26eb21d0da11ce6b70e82" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.131896 4751 scope.go:117] "RemoveContainer" containerID="8ab084e559e8069a5cdd46d2514468a22129fd354769c2604ada982fbc95ae13" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.174719 4751 scope.go:117] "RemoveContainer" containerID="ded685defb3526390eca5f7cb2d53cfb12497b060a9cc1ce297a52cc7244f151" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.228429 4751 scope.go:117] "RemoveContainer" containerID="8694789fa0038f6976a755ccc1f09ff5edec94cba32aab400030d4cae96b540d" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.274741 4751 scope.go:117] "RemoveContainer" 
containerID="6cd06f2bb56b148e8bf2fd2524c5d527d970ea6c6b7ba394cc56edcda374faf1" Jan 30 21:43:23 crc kubenswrapper[4751]: I0130 21:43:23.307185 4751 scope.go:117] "RemoveContainer" containerID="6a2c138626ec1f6b7d91772998275ab4f054944271024ad8876c0420d7d4bbc9" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.628088 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-68c4b8fdd-wvfwg" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.650454 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-75666c8dc5-6rmsl" Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.725425 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.725978 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f4bd4b69-ntk8n" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" containerID="cri-o://f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" gracePeriod=60 Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.739233 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.739584 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" containerID="cri-o://702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" gracePeriod=60 Jan 30 21:43:25 crc kubenswrapper[4751]: I0130 21:43:25.976607 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:25 crc kubenswrapper[4751]: E0130 21:43:25.976953 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:28 crc kubenswrapper[4751]: I0130 21:43:28.891440 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6f4bd4b69-ntk8n" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.237:8004/healthcheck\": read tcp 10.217.0.2:44998->10.217.0.237:8004: read: connection reset by peer" Jan 30 21:43:28 crc kubenswrapper[4751]: I0130 21:43:28.921018 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.236:8000/healthcheck\": read tcp 10.217.0.2:39156->10.217.0.236:8000: read: connection reset by peer" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.412849 4751 generic.go:334] "Generic (PLEG): container finished" podID="8a808a38-f939-4b4f-8386-e177712737d6" containerID="702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.412933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" 
event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerDied","Data":"702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.414207 4751 generic.go:334] "Generic (PLEG): container finished" podID="29afad92-51c9-45a8-a6a0-ed64925f91f3" containerID="a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.414250 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerDied","Data":"a703bc2df41831a8f47a2fe3701f155f23aa568e85a94cdaaa0863e49f204a11"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.419578 4751 generic.go:334] "Generic (PLEG): container finished" podID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerID="f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.419663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerDied","Data":"f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.436168 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa019efa-4067-4bd5-b370-12f6a4e6b856" containerID="b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec" exitCode=0 Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.436237 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerDied","Data":"b6f44bbe8cb9612ed9ca700c8e80a7500e74cb71a824141436a42322c6f2c1ec"} Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.698992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.706728 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831460 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.831957 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832234 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832366 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") pod \"43d36aef-fb14-4701-8931-9aaa96d049a9\" (UID: \"43d36aef-fb14-4701-8931-9aaa96d049a9\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832488 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.832522 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") pod \"8a808a38-f939-4b4f-8386-e177712737d6\" (UID: \"8a808a38-f939-4b4f-8386-e177712737d6\") " Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.838905 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc" (OuterVolumeSpecName: "kube-api-access-wjzpc") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "kube-api-access-wjzpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.838953 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.845135 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt" (OuterVolumeSpecName: "kube-api-access-n7dxt") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "kube-api-access-n7dxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.857534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.883513 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.898459 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.925893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.932019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935292 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935317 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzpc\" (UniqueName: \"kubernetes.io/projected/8a808a38-f939-4b4f-8386-e177712737d6-kube-api-access-wjzpc\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935340 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dxt\" (UniqueName: \"kubernetes.io/projected/43d36aef-fb14-4701-8931-9aaa96d049a9-kube-api-access-n7dxt\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935349 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935357 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935365 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935373 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.935381 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.962875 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data" (OuterVolumeSpecName: "config-data") pod "43d36aef-fb14-4701-8931-9aaa96d049a9" (UID: "43d36aef-fb14-4701-8931-9aaa96d049a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.965203 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.966801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:29 crc kubenswrapper[4751]: I0130 21:43:29.971569 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data" (OuterVolumeSpecName: "config-data") pod "8a808a38-f939-4b4f-8386-e177712737d6" (UID: "8a808a38-f939-4b4f-8386-e177712737d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038243 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038798 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.038870 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d36aef-fb14-4701-8931-9aaa96d049a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.039233 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a808a38-f939-4b4f-8386-e177712737d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.448656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"aa019efa-4067-4bd5-b370-12f6a4e6b856","Type":"ContainerStarted","Data":"99fb433ba9970268f74cec9b102bd6a7711fb0ffe180c9fc936a1a0fbdf5d326"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.450022 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" event={"ID":"8a808a38-f939-4b4f-8386-e177712737d6","Type":"ContainerDied","Data":"499d3637c3e03f2b7dc0a86e62ae72f328746856d2c5b4b97226255304ddbec8"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452754 4751 scope.go:117] "RemoveContainer" containerID="702678d6125a2ef911b38b5fcb8c725d8c871b8257728962b6a494f07ee762d0" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.452967 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d6c877d68-9ktwv" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.457909 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"29afad92-51c9-45a8-a6a0-ed64925f91f3","Type":"ContainerStarted","Data":"827d0f7339fc9e8684aff7263232c5e0fc7867b4f20a549010fe4efd05859871"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.458238 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.464258 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f4bd4b69-ntk8n" event={"ID":"43d36aef-fb14-4701-8931-9aaa96d049a9","Type":"ContainerDied","Data":"c0307f0807d895bc4c4c81ee028a2f34849a32fd2400b791f772ab65d779a108"} Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.464383 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f4bd4b69-ntk8n" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.481769 4751 scope.go:117] "RemoveContainer" containerID="f4a9281bbfdd290c0c4cad13b45f8bae7e6a12cff0d866a3bc02118e3db003a9" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.501559 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.50153796 podStartE2EDuration="41.50153796s" podCreationTimestamp="2026-01-30 21:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:30.482047064 +0000 UTC m=+1749.227869713" watchObservedRunningTime="2026-01-30 21:43:30.50153796 +0000 UTC m=+1749.247360619" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.534338 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.553475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-d6c877d68-9ktwv"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.555578 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.555558409 podStartE2EDuration="48.555558409s" podCreationTimestamp="2026-01-30 21:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:43:30.536743481 +0000 UTC m=+1749.282566140" watchObservedRunningTime="2026-01-30 21:43:30.555558409 +0000 UTC m=+1749.301381058" Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.579304 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:30 crc kubenswrapper[4751]: I0130 21:43:30.590800 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f4bd4b69-ntk8n"] Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.024762 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" path="/var/lib/kubelet/pods/43d36aef-fb14-4701-8931-9aaa96d049a9/volumes" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.026774 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a808a38-f939-4b4f-8386-e177712737d6" path="/var/lib/kubelet/pods/8a808a38-f939-4b4f-8386-e177712737d6/volumes" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.560952 4751 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"] Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.561999 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="init" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562020 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="init" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562036 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562090 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: E0130 21:43:32.562103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562109 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562308 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a808a38-f939-4b4f-8386-e177712737d6" containerName="heat-cfnapi" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562356 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ea54f1-23d8-4e09-b159-bd66a7bb5618" containerName="dnsmasq-dns" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.562379 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d36aef-fb14-4701-8931-9aaa96d049a9" containerName="heat-api" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.563170 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.565155 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.565582 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.568653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.569381 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.584147 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"] Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.716868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.716995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.717109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.717245 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819389 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.819636 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.826078 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.828279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.835073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.837060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:43:32 crc kubenswrapper[4751]: I0130 21:43:32.920517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"
Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.451082 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7ccc7fc744-trd9b"
Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.564017 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"]
Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.564250 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" containerID="cri-o://11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" gracePeriod=60
Jan 30 21:43:33 crc kubenswrapper[4751]: I0130 21:43:33.886785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"]
Jan 30 21:43:34 crc kubenswrapper[4751]: I0130 21:43:34.513846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerStarted","Data":"a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28"}
Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.528524 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.530118 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.531476 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 30 21:43:35 crc kubenswrapper[4751]: E0130 21:43:35.531516 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine"
Jan 30 21:43:38 crc kubenswrapper[4751]: I0130 21:43:38.039577 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88"
Jan 30 21:43:38 crc kubenswrapper[4751]: E0130 21:43:38.041140 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 21:43:40 crc kubenswrapper[4751]: I0130 21:43:40.114018 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="aa019efa-4067-4bd5-b370-12f6a4e6b856" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.250733 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.362506 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.796316 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dmqw2"]
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.815102 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dmqw2"]
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.873201 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-q9ws6"]
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.876959 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.878891 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.903365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q9ws6"]
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.907959 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.908282 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:43 crc kubenswrapper[4751]: I0130 21:43:43.991939 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da95a3dd-69cf-4a27-af6c-1ac5b262c00a" path="/var/lib/kubelet/pods/da95a3dd-69cf-4a27-af6c-1ac5b262c00a/volumes"
Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010133 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010261 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.010493 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.018253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.018907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.021294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.069197 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"aodh-db-sync-q9ws6\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") " pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.202536 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q9ws6" Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.675453 4751 generic.go:334] "Generic (PLEG): container finished" podID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" exitCode=0 Jan 30 21:43:44 crc kubenswrapper[4751]: I0130 21:43:44.675538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerDied","Data":"11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba"} Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.527884 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528231 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528634 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 21:43:45 crc kubenswrapper[4751]: E0130 21:43:45.528686 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine" Jan 30 21:43:46 crc kubenswrapper[4751]: I0130 21:43:46.970294 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003741 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003913 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.003957 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.004018 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") pod \"191c5874-d3f0-4a2b-adcf-8ceed228e459\" (UID: \"191c5874-d3f0-4a2b-adcf-8ceed228e459\") " Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.059749 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn" (OuterVolumeSpecName: "kube-api-access-5m6kn") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "kube-api-access-5m6kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.061233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.069037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107627 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m6kn\" (UniqueName: \"kubernetes.io/projected/191c5874-d3f0-4a2b-adcf-8ceed228e459-kube-api-access-5m6kn\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107665 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.107676 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.177499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data" (OuterVolumeSpecName: "config-data") pod "191c5874-d3f0-4a2b-adcf-8ceed228e459" (UID: "191c5874-d3f0-4a2b-adcf-8ceed228e459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.209713 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191c5874-d3f0-4a2b-adcf-8ceed228e459-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:47 crc kubenswrapper[4751]: W0130 21:43:47.229983 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee22e47a_e31f_4d01_8eec_e4d24dbb02ca.slice/crio-4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77 WatchSource:0}: Error finding container 4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77: Status 404 returned error can't find the container with id 4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77 Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.232811 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.723588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" event={"ID":"191c5874-d3f0-4a2b-adcf-8ceed228e459","Type":"ContainerDied","Data":"1364dfb35f78bd1c1c6c4e97299ac2e166c205513eddfbb1858b9264a7b65646"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.723999 4751 scope.go:117] "RemoveContainer" containerID="11e57762e4522662e81cc49a8798f8c5da2cf1ea9687b1d95d2124777a58dfba" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.724150 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9fcd4c7f-gcp2z" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.741343 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerStarted","Data":"a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.752646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerStarted","Data":"4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77"} Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.771108 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" podStartSLOduration=3.123412981 podStartE2EDuration="15.771087015s" podCreationTimestamp="2026-01-30 21:43:32 +0000 UTC" firstStartedPulling="2026-01-30 21:43:33.878005827 +0000 UTC m=+1752.623828476" lastFinishedPulling="2026-01-30 21:43:46.525679861 +0000 UTC m=+1765.271502510" observedRunningTime="2026-01-30 21:43:47.759249255 +0000 UTC m=+1766.505071904" watchObservedRunningTime="2026-01-30 21:43:47.771087015 +0000 UTC m=+1766.516909664" Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.810946 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:43:47 crc kubenswrapper[4751]: I0130 21:43:47.832819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-d9fcd4c7f-gcp2z"] Jan 30 21:43:48 crc kubenswrapper[4751]: I0130 21:43:48.032169 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" path="/var/lib/kubelet/pods/191c5874-d3f0-4a2b-adcf-8ceed228e459/volumes" Jan 30 21:43:48 crc kubenswrapper[4751]: I0130 21:43:48.180289 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" containerID="cri-o://c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05" gracePeriod=604796 Jan 30 21:43:50 crc kubenswrapper[4751]: I0130 21:43:50.113570 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 21:43:50 crc kubenswrapper[4751]: I0130 21:43:50.404577 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Jan 30 21:43:52 crc kubenswrapper[4751]: I0130 21:43:52.976000 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:43:52 crc kubenswrapper[4751]: E0130 21:43:52.976996 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.856123 4751 generic.go:334] "Generic (PLEG): 
container finished" podID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerID="c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05" exitCode=0 Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.856190 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05"} Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.860189 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerStarted","Data":"e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023"} Jan 30 21:43:54 crc kubenswrapper[4751]: I0130 21:43:54.884042 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-q9ws6" podStartSLOduration=5.295973705 podStartE2EDuration="11.883989085s" podCreationTimestamp="2026-01-30 21:43:43 +0000 UTC" firstStartedPulling="2026-01-30 21:43:47.232938336 +0000 UTC m=+1765.978760985" lastFinishedPulling="2026-01-30 21:43:53.820953716 +0000 UTC m=+1772.566776365" observedRunningTime="2026-01-30 21:43:54.875202118 +0000 UTC m=+1773.621024767" watchObservedRunningTime="2026-01-30 21:43:54.883989085 +0000 UTC m=+1773.629811734" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.298838 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.415769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416416 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416545 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416624 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") pod \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\" (UID: \"2ed6288f-1f28-4189-a452-10ed3fa78c7f\") " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.416880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.417803 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.417822 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.418159 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.432687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.437471 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.440612 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.454281 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx" (OuterVolumeSpecName: "kube-api-access-zqvcx") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "kube-api-access-zqvcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.477629 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data" (OuterVolumeSpecName: "config-data") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.478294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77" (OuterVolumeSpecName: "persistence") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "pvc-846ec118-ed9e-4829-80fb-53a6edccba77". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520244 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520317 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") on node \"crc\" " Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520452 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520470 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ed6288f-1f28-4189-a452-10ed3fa78c7f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520483 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ed6288f-1f28-4189-a452-10ed3fa78c7f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqvcx\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-kube-api-access-zqvcx\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.520511 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.538809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.559404 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.559549 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-846ec118-ed9e-4829-80fb-53a6edccba77" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77") on node "crc"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.622733 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ed6288f-1f28-4189-a452-10ed3fa78c7f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.622957 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.631516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ed6288f-1f28-4189-a452-10ed3fa78c7f" (UID: "2ed6288f-1f28-4189-a452-10ed3fa78c7f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.725020 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ed6288f-1f28-4189-a452-10ed3fa78c7f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872905 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2ed6288f-1f28-4189-a452-10ed3fa78c7f","Type":"ContainerDied","Data":"14b244ff165ab8225e2f7204427c69fbfcfd61b1331f0eb3d778a03cddbe88d2"}
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872977 4751 scope.go:117] "RemoveContainer" containerID="c118273bc1b7e17b96ef2802a30e188177f69c364926f8d0532e695e28d4ca05"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.872936 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.906921 4751 scope.go:117] "RemoveContainer" containerID="dc43aef27eee6e5555871ea3e140a0c234f05afe3ded956404826b8a2999ed23"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.933173 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.960234 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975024 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975589 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975602 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine"
Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975619 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="setup-container"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975625 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="setup-container"
Jan 30 21:43:55 crc kubenswrapper[4751]: E0130 21:43:55.975641 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975647 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975881 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" containerName="rabbitmq"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.975896 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="191c5874-d3f0-4a2b-adcf-8ceed228e459" containerName="heat-engine"
Jan 30 21:43:55 crc kubenswrapper[4751]: I0130 21:43:55.977291 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.025197 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed6288f-1f28-4189-a452-10ed3fa78c7f" path="/var/lib/kubelet/pods/2ed6288f-1f28-4189-a452-10ed3fa78c7f/volumes"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.027385 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030835 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.030918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133685 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133781 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.133995 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134121 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134205 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.134690 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.135768 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.136128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-server-conf\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.136238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.137460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-config-data\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139491 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139516 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2001e391e04ee7d0edfbd20e4205f1b60c57288335d512357b8e0f2ce2f191a2/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.139696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.141472 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-pod-info\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.141886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.154586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kgx\" (UniqueName: \"kubernetes.io/projected/279dd57b-8f7d-4730-a9ee-cf124f8c0d52-kube-api-access-75kgx\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.223235 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-846ec118-ed9e-4829-80fb-53a6edccba77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-846ec118-ed9e-4829-80fb-53a6edccba77\") pod \"rabbitmq-server-1\" (UID: \"279dd57b-8f7d-4730-a9ee-cf124f8c0d52\") " pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.329535 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 30 21:43:56 crc kubenswrapper[4751]: I0130 21:43:56.931467 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.912024 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerID="e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023" exitCode=0
Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.912448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerDied","Data":"e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023"}
Jan 30 21:43:57 crc kubenswrapper[4751]: I0130 21:43:57.915180 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"f0e17415c02fe28ff4dbf786bebc3e608f6469c1274a4f23432aca898f8b98b7"}
Jan 30 21:43:58 crc kubenswrapper[4751]: I0130 21:43:58.926555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310"}
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.323335 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") "
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447636 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") "
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") "
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.447850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") pod \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\" (UID: \"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca\") "
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.454290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts" (OuterVolumeSpecName: "scripts") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.455781 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7" (OuterVolumeSpecName: "kube-api-access-45dc7") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "kube-api-access-45dc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.496525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data" (OuterVolumeSpecName: "config-data") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.505654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" (UID: "ee22e47a-e31f-4d01-8eec-e4d24dbb02ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551886 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45dc7\" (UniqueName: \"kubernetes.io/projected/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-kube-api-access-45dc7\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551945 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551958 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.551969 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.941592 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q9ws6"
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.941593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q9ws6" event={"ID":"ee22e47a-e31f-4d01-8eec-e4d24dbb02ca","Type":"ContainerDied","Data":"4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77"}
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943447 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e16bcad8b0611f5c448e56540c19d9dd39736dcd1a42341bea3573a92a46e77"
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943706 4751 generic.go:334] "Generic (PLEG): container finished" podID="37b91419-687f-4907-888d-9344d1e8602a" containerID="a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116" exitCode=0
Jan 30 21:43:59 crc kubenswrapper[4751]: I0130 21:43:59.943759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerDied","Data":"a3b766c6a4c7e111f675065738bc5eed3b92e29a07eaf7151c0c434f41fa2116"}
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.420614 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421229 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" containerID="cri-o://c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" gracePeriod=30
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421300 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" containerID="cri-o://da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" gracePeriod=30
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421439 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" containerID="cri-o://a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" gracePeriod=30
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.421779 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" containerID="cri-o://41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" gracePeriod=30
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.956489 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" exitCode=0
Jan 30 21:44:00 crc kubenswrapper[4751]: I0130 21:44:00.956569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"}
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.513442 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w"
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") "
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") "
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598899 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") "
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.598968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") pod \"37b91419-687f-4907-888d-9344d1e8602a\" (UID: \"37b91419-687f-4907-888d-9344d1e8602a\") "
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.603801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.618916 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs" (OuterVolumeSpecName: "kube-api-access-5n6xs") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "kube-api-access-5n6xs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.635122 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.638167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory" (OuterVolumeSpecName: "inventory") pod "37b91419-687f-4907-888d-9344d1e8602a" (UID: "37b91419-687f-4907-888d-9344d1e8602a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701730 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701770 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701780 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6xs\" (UniqueName: \"kubernetes.io/projected/37b91419-687f-4907-888d-9344d1e8602a-kube-api-access-5n6xs\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.701790 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37b91419-687f-4907-888d-9344d1e8602a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.971062 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" exitCode=0
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.971141 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"}
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" event={"ID":"37b91419-687f-4907-888d-9344d1e8602a","Type":"ContainerDied","Data":"a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28"}
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973155 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e1ec6aefe3e881897f0787f3cc0457ad03a27579caa9bb077b0717aa35bb28"
Jan 30 21:44:01 crc kubenswrapper[4751]: I0130 21:44:01.973134 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.048778 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:02 crc kubenswrapper[4751]: E0130 21:44:02.049516 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049539 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: E0130 21:44:02.049602 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049611 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049883 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b91419-687f-4907-888d-9344d1e8602a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.049920 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" containerName="aodh-db-sync" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.050972 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.053543 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.053769 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.054052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.055181 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.059836 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.213823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.214244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.214548 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317690 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.317895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.321007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.321980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.341621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2bbbp\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.377758 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:02 crc kubenswrapper[4751]: I0130 21:44:02.967112 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp"] Jan 30 21:44:03 crc kubenswrapper[4751]: I0130 21:44:02.999475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerStarted","Data":"5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7"} Jan 30 21:44:04 crc kubenswrapper[4751]: I0130 21:44:04.018173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerStarted","Data":"3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95"} Jan 30 21:44:04 crc kubenswrapper[4751]: I0130 21:44:04.076017 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" podStartSLOduration=1.669303446 podStartE2EDuration="2.075997556s" podCreationTimestamp="2026-01-30 21:44:02 +0000 UTC" firstStartedPulling="2026-01-30 21:44:02.961039415 +0000 UTC m=+1781.706862074" lastFinishedPulling="2026-01-30 21:44:03.367733535 +0000 UTC m=+1782.113556184" observedRunningTime="2026-01-30 21:44:04.059426369 +0000 UTC m=+1782.805249008" watchObservedRunningTime="2026-01-30 21:44:04.075997556 +0000 UTC m=+1782.821820205" Jan 30 21:44:06 crc kubenswrapper[4751]: I0130 21:44:06.051021 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" exitCode=0 Jan 30 21:44:06 crc kubenswrapper[4751]: I0130 21:44:06.051084 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.073026 4751 generic.go:334] "Generic (PLEG): container finished" podID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerID="3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95" exitCode=0 Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.073278 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerDied","Data":"3fb3743686fabe731a3892d66238c8c6b9475df17820dd33a4c69d432514da95"} Jan 30 21:44:07 crc kubenswrapper[4751]: I0130 21:44:07.976358 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:07 crc kubenswrapper[4751]: E0130 21:44:07.976732 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.686755 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793079 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.793240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") pod \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\" (UID: \"a4b9ecbd-4cf2-4554-b209-d7a421499f08\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.819678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn" (OuterVolumeSpecName: "kube-api-access-2vhpn") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "kube-api-access-2vhpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.828117 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory" (OuterVolumeSpecName: "inventory") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.839277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4b9ecbd-4cf2-4554-b209-d7a421499f08" (UID: "a4b9ecbd-4cf2-4554-b209-d7a421499f08"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896575 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896607 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4b9ecbd-4cf2-4554-b209-d7a421499f08-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.896619 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vhpn\" (UniqueName: \"kubernetes.io/projected/a4b9ecbd-4cf2-4554-b209-d7a421499f08-kube-api-access-2vhpn\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.921031 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999256 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999308 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:08 crc kubenswrapper[4751]: I0130 21:44:08.999413 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:08.999708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:08.999748 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") pod \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\" (UID: \"2c6b6e10-77a2-49e7-a4eb-25af482bfab8\") " Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.005561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk" (OuterVolumeSpecName: "kube-api-access-6cgnk") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "kube-api-access-6cgnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.005835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts" (OuterVolumeSpecName: "scripts") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.077699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.094628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" event={"ID":"a4b9ecbd-4cf2-4554-b209-d7a421499f08","Type":"ContainerDied","Data":"5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100519 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5328f5a63580c9f7ec213c84186e982dd1d8995e50601662fa82a7d7034722f7" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.100575 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2bbbp" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.108963 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109005 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109020 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgnk\" (UniqueName: \"kubernetes.io/projected/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-kube-api-access-6cgnk\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.109036 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119048 4751 generic.go:334] "Generic (PLEG): container finished" podID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" exitCode=0 Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c6b6e10-77a2-49e7-a4eb-25af482bfab8","Type":"ContainerDied","Data":"7283a35e10e58cb6fd870643eadfa452a5d15d1f89aa7955246ef678a98a324c"} Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119136 4751 scope.go:117] "RemoveContainer" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.119311 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.154374 4751 scope.go:117] "RemoveContainer" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.154379 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.156526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data" (OuterVolumeSpecName: "config-data") pod "2c6b6e10-77a2-49e7-a4eb-25af482bfab8" (UID: "2c6b6e10-77a2-49e7-a4eb-25af482bfab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.183378 4751 scope.go:117] "RemoveContainer" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199069 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199592 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199610 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199630 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199636 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199676 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199685 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199705 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199713 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.199725 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199731 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199930 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-notifier" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199948 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-api" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199958 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-listener" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199968 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" containerName="aodh-evaluator" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.199997 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b9ecbd-4cf2-4554-b209-d7a421499f08" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.200944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.209504 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210224 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210480 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210615 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.210981 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.211001 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6b6e10-77a2-49e7-a4eb-25af482bfab8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.211250 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.216009 4751 scope.go:117] "RemoveContainer" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241518 4751 scope.go:117] "RemoveContainer" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.241894 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": container with ID starting with 41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187 not found: ID does not exist" containerID="41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241930 4751 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187"} err="failed to get container status \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": rpc error: code = NotFound desc = could not find container \"41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187\": container with ID starting with 41971d5f461004d0df485ffcf9187243aae3bb043417ff8edc3c83dd2dec4187 not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.241955 4751 scope.go:117] "RemoveContainer" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242318 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": container with ID starting with da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab not found: ID does not exist" containerID="da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242354 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab"} err="failed to get container status \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": rpc error: code = NotFound desc = could not find container \"da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab\": container with ID starting with da16b64f8e713580f787cbd2e4719bba2e477f964b675f8a506b95c51441b6ab not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242368 4751 scope.go:117] "RemoveContainer" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242586 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": container with ID starting with a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd not found: ID does not exist" containerID="a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242608 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd"} err="failed to get container status \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": rpc error: code = NotFound desc = could not find container \"a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd\": container with ID starting with a79387816d9bbf1412e00ab664a6adb7838bd04ef1275531e7ddfc43ff77d9fd not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242621 4751 scope.go:117] "RemoveContainer" containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: E0130 21:44:09.242883 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": container with ID starting with c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9 not found: ID does not exist" 
containerID="c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.242898 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9"} err="failed to get container status \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": rpc error: code = NotFound desc = could not find container \"c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9\": container with ID starting with c5cd4a1a71248838ed9e49a494539bb4ff200313cd7269c902fef1ca59c59df9 not found: ID does not exist" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.313883 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.314072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416486 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.416579 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.419910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.421275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.422658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.437498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.464944 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.483294 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.496711 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.500423 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.505724 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-k9tjh" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.505956 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.506125 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.509818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.510152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.511842 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.526022 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.619970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620202 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.620611 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724004 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724449 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724480 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.724603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.730886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-scripts\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.731014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.731621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-config-data\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.732877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-internal-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.745853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5z8\" (UniqueName: \"kubernetes.io/projected/0c9eccf2-9252-4f35-9aff-56f0e15102a1-kube-api-access-qt5z8\") pod \"aodh-0\" (UID: 
\"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.751518 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c9eccf2-9252-4f35-9aff-56f0e15102a1-public-tls-certs\") pod \"aodh-0\" (UID: \"0c9eccf2-9252-4f35-9aff-56f0e15102a1\") " pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.969661 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 21:44:09 crc kubenswrapper[4751]: I0130 21:44:09.988418 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6b6e10-77a2-49e7-a4eb-25af482bfab8" path="/var/lib/kubelet/pods/2c6b6e10-77a2-49e7-a4eb-25af482bfab8/volumes" Jan 30 21:44:10 crc kubenswrapper[4751]: I0130 21:44:10.174099 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl"] Jan 30 21:44:10 crc kubenswrapper[4751]: W0130 21:44:10.563442 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9eccf2_9252_4f35_9aff_56f0e15102a1.slice/crio-e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02 WatchSource:0}: Error finding container e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02: Status 404 returned error can't find the container with id e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02 Jan 30 21:44:10 crc kubenswrapper[4751]: I0130 21:44:10.571591 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.185932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerStarted","Data":"5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.186233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerStarted","Data":"51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.189255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"428802cdcdc1078d2fc15c696d718c7a1621a6f2124b81b408d19f3109500e67"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.189282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"e4e3ec36fc504c99439e3712fb47d464abb15c2638d8103de754c0765bb55f02"} Jan 30 21:44:11 crc kubenswrapper[4751]: I0130 21:44:11.240122 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" podStartSLOduration=1.8336995680000001 podStartE2EDuration="2.239966546s" podCreationTimestamp="2026-01-30 21:44:09 +0000 UTC" firstStartedPulling="2026-01-30 21:44:10.171168661 +0000 UTC m=+1788.916991310" lastFinishedPulling="2026-01-30 21:44:10.577435639 +0000 UTC m=+1789.323258288" observedRunningTime="2026-01-30 21:44:11.228866167 +0000 UTC m=+1789.974688816" 
watchObservedRunningTime="2026-01-30 21:44:11.239966546 +0000 UTC m=+1789.985789185" Jan 30 21:44:13 crc kubenswrapper[4751]: I0130 21:44:13.222651 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"d8eaf98b1b6bb0b06d3301fd07ef309c4552c2d25f0cebdc4f98ccc76a38ceb1"} Jan 30 21:44:14 crc kubenswrapper[4751]: I0130 21:44:14.235253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"72b8e80ad3b6064951d95698af079dbf14fe74e8b3f9e58ebb2db863341fa90a"} Jan 30 21:44:15 crc kubenswrapper[4751]: I0130 21:44:15.249127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c9eccf2-9252-4f35-9aff-56f0e15102a1","Type":"ContainerStarted","Data":"965553b39109cd9b09e2aa0be79059e6be3054cd57f0984afa3f7550e512cf34"} Jan 30 21:44:15 crc kubenswrapper[4751]: I0130 21:44:15.311591 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.508449134 podStartE2EDuration="6.311572159s" podCreationTimestamp="2026-01-30 21:44:09 +0000 UTC" firstStartedPulling="2026-01-30 21:44:10.577722577 +0000 UTC m=+1789.323545226" lastFinishedPulling="2026-01-30 21:44:14.380845602 +0000 UTC m=+1793.126668251" observedRunningTime="2026-01-30 21:44:15.288033754 +0000 UTC m=+1794.033856403" watchObservedRunningTime="2026-01-30 21:44:15.311572159 +0000 UTC m=+1794.057394808" Jan 30 21:44:19 crc kubenswrapper[4751]: I0130 21:44:19.976449 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:19 crc kubenswrapper[4751]: E0130 21:44:19.977877 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:44:23 crc kubenswrapper[4751]: I0130 21:44:23.615204 4751 scope.go:117] "RemoveContainer" containerID="7254994e57fe71a7702af83ba12ef3a837f896f1d6e6e6a7dbba9ca54cdfc1ad" Jan 30 21:44:31 crc kubenswrapper[4751]: I0130 21:44:31.435177 4751 generic.go:334] "Generic (PLEG): container finished" podID="279dd57b-8f7d-4730-a9ee-cf124f8c0d52" containerID="0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310" exitCode=0 Jan 30 21:44:31 crc kubenswrapper[4751]: I0130 21:44:31.435288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerDied","Data":"0e7b5befd33a8603a2fbcf0bd4a03072a19d26c3f4e7aad7020c6d3a05574310"} Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.448674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"279dd57b-8f7d-4730-a9ee-cf124f8c0d52","Type":"ContainerStarted","Data":"e2065c8fac7270fc3baadedcaef0bc9456870e9d783937e9b9d6212f3ef535cc"} Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.449463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 30 21:44:32 crc kubenswrapper[4751]: I0130 21:44:32.479867 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.47984932 podStartE2EDuration="37.47984932s" podCreationTimestamp="2026-01-30 21:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:44:32.471807663 +0000 UTC m=+1811.217630332" watchObservedRunningTime="2026-01-30 21:44:32.47984932 +0000 UTC m=+1811.225671969" Jan 30 21:44:34 crc kubenswrapper[4751]: I0130 21:44:34.976869 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:44:35 crc kubenswrapper[4751]: I0130 21:44:35.490754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} Jan 30 21:44:46 crc kubenswrapper[4751]: I0130 21:44:46.334600 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 30 21:44:46 crc kubenswrapper[4751]: I0130 21:44:46.413670 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:50 crc kubenswrapper[4751]: I0130 21:44:50.698651 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" containerID="cri-o://fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" gracePeriod=604796 Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.381782 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522043 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522766 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: 
\"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.522823 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523306 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523438 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") pod \"f18b5d57-5b05-4ef0-bae3-68938e094510\" (UID: \"f18b5d57-5b05-4ef0-bae3-68938e094510\") " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.523919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524739 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524756 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.524771 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.531390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94" (OuterVolumeSpecName: "kube-api-access-8rt94") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "kube-api-access-8rt94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.531674 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.533289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.541637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info" (OuterVolumeSpecName: "pod-info") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.550007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef" (OuterVolumeSpecName: "persistence") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.558019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data" (OuterVolumeSpecName: "config-data") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.590202 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf" (OuterVolumeSpecName: "server-conf") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629162 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629204 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f18b5d57-5b05-4ef0-bae3-68938e094510-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629219 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rt94\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-kube-api-access-8rt94\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629259 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") on node \"crc\" " Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629274 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629289 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f18b5d57-5b05-4ef0-bae3-68938e094510-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.629302 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f18b5d57-5b05-4ef0-bae3-68938e094510-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.687011 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.687231 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef") on node "crc" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.732676 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f18b5d57-5b05-4ef0-bae3-68938e094510" (UID: "f18b5d57-5b05-4ef0-bae3-68938e094510"). InnerVolumeSpecName "rabbitmq-confd". 
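
The csi_attacher line just above records a CSI capability negotiation: NodeStageVolume/NodeUnstageVolume (the node-global device mount and unmount) are only called on drivers that advertise the STAGE_UNSTAGE_VOLUME node capability, and kubevirt.io.hostpath-provisioner does not, so UnmountDevice degrades to pure bookkeeping and succeeds immediately. A schematic of the check with stand-in types (the capability name is from the CSI spec; everything else here is a hypothetical sketch, not kubelet code):

    package main

    import "fmt"

    // NodeCapability is a stand-in for the CSI NodeServiceCapability types,
    // which really live in the CSI spec's protobuf definitions.
    type NodeCapability int

    const (
    	CapStageUnstageVolume NodeCapability = iota
    	CapGetVolumeStats
    )

    func hasCapability(caps []NodeCapability, want NodeCapability) bool {
    	for _, c := range caps {
    		if c == want {
    			return true
    		}
    	}
    	return false
    }

    func unmountDevice(driverCaps []NodeCapability) {
    	if !hasCapability(driverCaps, CapStageUnstageVolume) {
    		// Matches the log: no NodeUnstageVolume RPC is issued for this driver.
    		fmt.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...")
    		return
    	}
    	fmt.Println("calling NodeUnstageVolume on the CSI driver")
    }

    func main() {
    	// kubevirt.io.hostpath-provisioner advertises no stage/unstage support.
    	unmountDevice([]NodeCapability{CapGetVolumeStats})
    }
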
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.733469 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.733499 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f18b5d57-5b05-4ef0-bae3-68938e094510-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772566 4751 generic.go:334] "Generic (PLEG): container finished" podID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" exitCode=0 Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"} Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f18b5d57-5b05-4ef0-bae3-68938e094510","Type":"ContainerDied","Data":"a7dc563e23807f6efe79faed84ec9c2b00f86190217519d5f3838b56a30401b8"} Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772678 4751 scope.go:117] "RemoveContainer" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.772892 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.837401 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.853519 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.873493 4751 scope.go:117] "RemoveContainer" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.924517 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 21:44:57 crc kubenswrapper[4751]: E0130 21:44:57.941005 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="setup-container" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.941064 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="setup-container" Jan 30 21:44:57 crc kubenswrapper[4751]: E0130 21:44:57.941103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.941112 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.944616 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" containerName="rabbitmq" Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.964517 4751 util.go:30] "No sandbox for pod can be found. 
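
The cpu_manager/memory_manager lines above fire because rabbitmq-server-0 was deleted and re-created under a new UID: before admitting the new pod, the resource managers purge any CPU/memory assignments still recorded against containers of the old UID (f18b5d57-...). A toy version of that cleanup, assuming state keyed by podUID and containerName as in the log (illustrative only, not the kubelet's state store):

    package main

    import "fmt"

    // assignments maps podUID -> containerName -> assigned resource (a CPU set here).
    type assignments map[string]map[string]string

    // removeStaleState mimics the RemoveStaleState log lines: entries whose pod
    // no longer exists on the node are deleted before new pods are admitted.
    func removeStaleState(state assignments, activePods map[string]bool) {
    	for podUID, containers := range state {
    		if activePods[podUID] {
    			continue
    		}
    		for name := range containers {
    			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
    		}
    		delete(state, podUID)
    	}
    }

    func main() {
    	state := assignments{
    		"f18b5d57-5b05-4ef0-bae3-68938e094510": {"setup-container": "0-3", "rabbitmq": "0-3"},
    	}
    	removeStaleState(state, map[string]bool{}) // the old UID is gone from the node
    }
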
Jan 30 21:44:57 crc kubenswrapper[4751]: I0130 21:44:57.964517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.003657 4751 scope.go:117] "RemoveContainer" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"
Jan 30 21:44:58 crc kubenswrapper[4751]: E0130 21:44:58.006534 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": container with ID starting with fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee not found: ID does not exist" containerID="fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.006581 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee"} err="failed to get container status \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": rpc error: code = NotFound desc = could not find container \"fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee\": container with ID starting with fc68458705d52ffe188e292effdc48adcef036c4730d9da5c5e79dd0196fceee not found: ID does not exist"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.006607 4751 scope.go:117] "RemoveContainer" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"
Jan 30 21:44:58 crc kubenswrapper[4751]: E0130 21:44:58.042515 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": container with ID starting with 754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256 not found: ID does not exist" containerID="754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.042617 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256"} err="failed to get container status \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": rpc error: code = NotFound desc = could not find container \"754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256\": container with ID starting with 754b224d6c0aec71e7dd9667dbd15b0273b071f0dbfebe749ccad88991070256 not found: ID does not exist"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.046430 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18b5d57-5b05-4ef0-bae3-68938e094510" path="/var/lib/kubelet/pods/f18b5d57-5b05-4ef0-bae3-68938e094510/volumes"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.047520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071760 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.071921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072116 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072257 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072280 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.072348 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.174688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175201 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175240 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175411 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175552 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.175761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.176153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.177052 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.178200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.179801 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ab0c22c-f078-413c-ac94-9e543a02c3fb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.181906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.184166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ab0c22c-f078-413c-ac94-9e543a02c3fb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.184191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ab0c22c-f078-413c-ac94-9e543a02c3fb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.186361 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.186457 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4befba5b3452f215320be9365c178860d706182c1f41ab25a94828e6255d8c2/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.192213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.199096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcchl\" (UniqueName: \"kubernetes.io/projected/4ab0c22c-f078-413c-ac94-9e543a02c3fb-kube-api-access-bcchl\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.268984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99e8dcc7-eb80-4d4c-acee-9411b4d160ef\") pod \"rabbitmq-server-0\" (UID: \"4ab0c22c-f078-413c-ac94-9e543a02c3fb\") " pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.363371 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 21:44:58 crc kubenswrapper[4751]: I0130 21:44:58.919925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 21:44:58 crc kubenswrapper[4751]: W0130 21:44:58.928559 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab0c22c_f078_413c_ac94_9e543a02c3fb.slice/crio-600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918 WatchSource:0}: Error finding container 600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918: Status 404 returned error can't find the container with id 600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918
Jan 30 21:44:59 crc kubenswrapper[4751]: I0130 21:44:59.794518 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"600261f7b61ee0f41837d949f8922af6fa8967574640325202cd0079522e6918"}
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.195090 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.199182 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.209649 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.209840 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.217940 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.331537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.435965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.437050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.480720 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.481634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"collect-profiles-29496825-qqpqc\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:00 crc kubenswrapper[4751]: I0130 21:45:00.538362 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:01 crc kubenswrapper[4751]: W0130 21:45:01.016259 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60a5fa77_b23e_417a_9854_929675be1c58.slice/crio-cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e WatchSource:0}: Error finding container cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e: Status 404 returned error can't find the container with id cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e
Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.019371 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.829006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d"}
Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831775 4751 generic.go:334] "Generic (PLEG): container finished" podID="60a5fa77-b23e-417a-9854-929675be1c58" containerID="a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a" exitCode=0
Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831827 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerDied","Data":"a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a"}
Jan 30 21:45:01 crc kubenswrapper[4751]: I0130 21:45:01.831853 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerStarted","Data":"cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e"}
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.319245 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.406622 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") "
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.406962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") "
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.407137 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") pod \"60a5fa77-b23e-417a-9854-929675be1c58\" (UID: \"60a5fa77-b23e-417a-9854-929675be1c58\") "
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.407372 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume" (OuterVolumeSpecName: "config-volume") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.408497 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a5fa77-b23e-417a-9854-929675be1c58-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.414150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp" (OuterVolumeSpecName: "kube-api-access-vvzrp") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "kube-api-access-vvzrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.414777 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60a5fa77-b23e-417a-9854-929675be1c58" (UID: "60a5fa77-b23e-417a-9854-929675be1c58"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.511875 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvzrp\" (UniqueName: \"kubernetes.io/projected/60a5fa77-b23e-417a-9854-929675be1c58-kube-api-access-vvzrp\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.511914 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a5fa77-b23e-417a-9854-929675be1c58-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873089 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc" event={"ID":"60a5fa77-b23e-417a-9854-929675be1c58","Type":"ContainerDied","Data":"cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e"}
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873140 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdff7d7d0a81f7d5dee4f6ad01cc4d679c85e0453b551c0012ae37a4dc30d48e"
Jan 30 21:45:03 crc kubenswrapper[4751]: I0130 21:45:03.873169 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"
Jan 30 21:45:23 crc kubenswrapper[4751]: I0130 21:45:23.989054 4751 scope.go:117] "RemoveContainer" containerID="bdd03488d3195a549fc04a34aab5bd9be42fab7815eccaedf690eaba2f311d80"
Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.035740 4751 scope.go:117] "RemoveContainer" containerID="5e15084dd70b42693552f9d64b22474ea93dd026e14a53bb39cd74bd8ba86b97"
Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.067502 4751 scope.go:117] "RemoveContainer" containerID="e3064bf78a4cd92d2b24a8bdce3402cc789ce700663ceec250dcf8768aa0ad5c"
Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.121128 4751 scope.go:117] "RemoveContainer" containerID="d9b252a19e1756dc14b8604eb4ec0d16757d20c0506507f763599f15997045f8"
Jan 30 21:45:24 crc kubenswrapper[4751]: I0130 21:45:24.173245 4751 scope.go:117] "RemoveContainer" containerID="d9de83cadc3b076ba912dc65301ea8bc1d6d0414a32e18815fa439a9c91d4dfb"
Jan 30 21:45:33 crc kubenswrapper[4751]: I0130 21:45:33.242397 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ab0c22c-f078-413c-ac94-9e543a02c3fb" containerID="228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d" exitCode=0
Jan 30 21:45:33 crc kubenswrapper[4751]: I0130 21:45:33.242490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerDied","Data":"228bb523988a04ece3190be6ec56bedbaf8c4a0b73cde269fd8686478b71db4d"}
Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.256754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4ab0c22c-f078-413c-ac94-9e543a02c3fb","Type":"ContainerStarted","Data":"d96b636201e112e943d14030291559f2a14d8aca6384012b51550acf798007fd"}
Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.259972 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 21:45:34 crc kubenswrapper[4751]: I0130 21:45:34.311775 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.311726568 podStartE2EDuration="37.311726568s" podCreationTimestamp="2026-01-30 21:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 21:45:34.282396657 +0000 UTC m=+1873.028219336" watchObservedRunningTime="2026-01-30 21:45:34.311726568 +0000 UTC m=+1873.057549217"
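
Note the sentinel timestamps in the tracker entry above: firstStartedPulling and lastFinishedPulling are 0001-01-01 00:00:00 +0000 UTC, Go's zero time.Time, which indicates no image pull happened (the images were already on the node), so podStartSLOduration equals podStartE2EDuration at 37.311726568s. A guard for that case might look like this (illustrative; the duration is the one logged above, and this is not the tracker's actual code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func sloDuration(e2e time.Duration, pullStart, pullEnd time.Time) time.Duration {
    	// Zero-valued timestamps ("0001-01-01 00:00:00 +0000 UTC" in the log)
    	// mean no pull occurred; nothing is subtracted from the E2E duration.
    	if pullStart.IsZero() || pullEnd.IsZero() {
    		return e2e
    	}
    	return e2e - pullEnd.Sub(pullStart)
    }

    func main() {
    	fmt.Println(sloDuration(37311726568*time.Nanosecond, time.Time{}, time.Time{}))
    }
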
Jan 30 21:45:48 crc kubenswrapper[4751]: I0130 21:45:48.365864 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.292993 4751 scope.go:117] "RemoveContainer" containerID="3f3a7ffb288bc415dc5b59baa7fb9c6b68e00f52800d342ad995dbc272c7f1bb"
Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.335737 4751 scope.go:117] "RemoveContainer" containerID="cf22f8655eccc32aa59cef7b29c129725319b4f4f4da7c51fdef15993d0d2382"
Jan 30 21:46:24 crc kubenswrapper[4751]: I0130 21:46:24.383733 4751 scope.go:117] "RemoveContainer" containerID="dc31a52f5646a180bf7d41d4f12f928e7c63335be084922d3c985b8fa786c23e"
Jan 30 21:46:43 crc kubenswrapper[4751]: I0130 21:46:43.052904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mxcnd"]
Jan 30 21:46:43 crc kubenswrapper[4751]: I0130 21:46:43.072042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mxcnd"]
Jan 30 21:46:44 crc kubenswrapper[4751]: I0130 21:46:44.000055 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5" path="/var/lib/kubelet/pods/ad7304ef-4416-4a1b-a5f1-4fdaae1f5aa5/volumes"
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.077995 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.108848 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.126934 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.148111 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.180428 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hgg7b"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.194383 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a004-account-create-update-zkpzg"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.207735 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1da7-account-create-update-q9cg8"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.218971 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fed1-account-create-update-ztdkt"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.231871 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-p9lfn"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.243386 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hgg7b"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.256431 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.270850 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-dd31-account-create-update-4hlqb"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.281553 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7tt6b"]
Jan 30 21:46:50 crc kubenswrapper[4751]: I0130 21:46:50.294712 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7tt6b"]
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.992022 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f348fb-7f83-40db-98b2-7e8bc603a3e6" path="/var/lib/kubelet/pods/37f348fb-7f83-40db-98b2-7e8bc603a3e6/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.993080 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbc6a33-d240-4982-ade1-668f5da8b516" path="/var/lib/kubelet/pods/4fbc6a33-d240-4982-ade1-668f5da8b516/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.993971 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55099194-6cb2-437d-ae0d-a08c104de380" path="/var/lib/kubelet/pods/55099194-6cb2-437d-ae0d-a08c104de380/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.994836 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722402dd-bf51-47a6-b20e-85aec93527d9" path="/var/lib/kubelet/pods/722402dd-bf51-47a6-b20e-85aec93527d9/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.997841 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93341dcd-a293-4879-8baf-855556383780" path="/var/lib/kubelet/pods/93341dcd-a293-4879-8baf-855556383780/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.998665 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1373e37-3653-4f5d-9978-9d1cca4e546b" path="/var/lib/kubelet/pods/d1373e37-3653-4f5d-9978-9d1cca4e546b/volumes"
Jan 30 21:46:51 crc kubenswrapper[4751]: I0130 21:46:51.999576 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2" path="/var/lib/kubelet/pods/e3a5d5df-fe19-49f0-b82a-afbe70b4c9f2/volumes"
Jan 30 21:46:54 crc kubenswrapper[4751]: I0130 21:46:54.127237 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:46:54 crc kubenswrapper[4751]: I0130 21:46:54.128033 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.040659 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"]
Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.059479 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"]
Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.074075 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-29gtt"]
Jan 30 21:46:58 crc kubenswrapper[4751]: I0130 21:46:58.088603 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-e51b-account-create-update-bskb2"]
Jan 30 21:46:59 crc kubenswrapper[4751]: I0130 21:46:59.991319 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2ec939-8595-4611-a636-a46fffaa8ebf" path="/var/lib/kubelet/pods/1f2ec939-8595-4611-a636-a46fffaa8ebf/volumes"
Jan 30 21:46:59 crc kubenswrapper[4751]: I0130 21:46:59.992807 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36824ca-c5a8-4514-9276-e49126a66018" path="/var/lib/kubelet/pods/d36824ca-c5a8-4514-9276-e49126a66018/volumes"
Jan 30 21:47:00 crc kubenswrapper[4751]: I0130 21:47:00.028499 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2xmqz"]
Jan 30 21:47:00 crc kubenswrapper[4751]: I0130 21:47:00.043773 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2xmqz"]
Jan 30 21:47:01 crc kubenswrapper[4751]: I0130 21:47:01.992247 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ed4323-ecba-4f90-b7ea-a5a0ff7713d6" path="/var/lib/kubelet/pods/27ed4323-ecba-4f90-b7ea-a5a0ff7713d6/volumes"
Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.050521 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-r8wrn"]
Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.066133 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-r8wrn"]
Jan 30 21:47:21 crc kubenswrapper[4751]: I0130 21:47:21.993149 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a93444-0221-40b7-9869-428788112ae2" path="/var/lib/kubelet/pods/32a93444-0221-40b7-9869-428788112ae2/volumes"
Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.126400 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.126479 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
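
The probe pair above is the kubelet's HTTP liveness check for machine-config-daemon failing because nothing is listening on 127.0.0.1:8798 (consistent with the container's earlier crash loop): the probe is an HTTP GET that must return a success status, and a connection refusal counts as a failure that, repeated past the probe's failure threshold, triggers another restart and feeds the backoff. A bare-bones equivalent of the check (illustrative, not the kubelet's prober; the URL is the one from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func livenessProbe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		// This is the case in the log:
    		// "dial tcp 127.0.0.1:8798: connect: connection refused".
    		return fmt.Errorf("Probe failed: %w", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("Probe failed: unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := livenessProbe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println(err)
    	}
    }
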
containerID="73515e94e6f7a825d6c9ac37458f6d6de21c87de5edaea4b69d38594e2145bf0" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.875372 4751 scope.go:117] "RemoveContainer" containerID="291bba4a83a01e50f5b8260a72f7589443ccaf7ad2482ecfd294e283e08c6b24" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.907263 4751 scope.go:117] "RemoveContainer" containerID="3d408c3254750e92d426d8cded49880995124a210d5a1b2ed7f46112cc91e938" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.934042 4751 scope.go:117] "RemoveContainer" containerID="1b5908039e6b19df93f09b06f432ee6033fa0e6a44029f167f2bd610adfb389f" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.956440 4751 scope.go:117] "RemoveContainer" containerID="d6a6c7d319a747790016da0d8bf07d4ab98c3d010eb7ce4cdc966d03c722da28" Jan 30 21:47:24 crc kubenswrapper[4751]: I0130 21:47:24.985507 4751 scope.go:117] "RemoveContainer" containerID="0b6fa7471c097e1e891323221bf11b4c99ace89cd782cbdf349cf6bb9189e783" Jan 30 21:47:25 crc kubenswrapper[4751]: I0130 21:47:25.009700 4751 scope.go:117] "RemoveContainer" containerID="6d875e7e116aae53e17490ae6ad2fcbb1e85d4c6ca0051daa64edc6f242dd628" Jan 30 21:47:25 crc kubenswrapper[4751]: I0130 21:47:25.032999 4751 scope.go:117] "RemoveContainer" containerID="963c152112c095b417af6d89f95dac5ff1eb3a21950942a6d257f3fa15a08da7" Jan 30 21:47:30 crc kubenswrapper[4751]: I0130 21:47:30.603185 4751 generic.go:334] "Generic (PLEG): container finished" podID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerID="5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4" exitCode=0 Jan 30 21:47:30 crc kubenswrapper[4751]: I0130 21:47:30.603227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerDied","Data":"5fb425b25c8902fe60e5dcd58df1f879542305f303c7a43c344cbd78332f0ba4"} Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.040585 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.063452 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.081698 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zhgsw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.092548 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0f07-account-create-update-fr6kw"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.103594 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.114403 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.124065 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-2618-account-create-update-fdl95"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.132908 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-31bb-account-create-update-w6h5f"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.142856 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.153028 4751 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.176252 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.187496 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-hr9lv"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.198953 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f7f7-account-create-update-d88cz"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.210147 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2gxmh"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.220356 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.230502 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lqv47"] Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.990146 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00437219-cb6b-48ad-a0cb-d75b82412ba1" path="/var/lib/kubelet/pods/00437219-cb6b-48ad-a0cb-d75b82412ba1/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.991030 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0297c6e3-62f8-49cc-a073-8bb104949456" path="/var/lib/kubelet/pods/0297c6e3-62f8-49cc-a073-8bb104949456/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.991818 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056813ab-3913-42db-afa1-a79cb8e3a3c9" path="/var/lib/kubelet/pods/056813ab-3913-42db-afa1-a79cb8e3a3c9/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.992616 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9f9eed-02b1-4541-8ebb-34826639233b" path="/var/lib/kubelet/pods/3b9f9eed-02b1-4541-8ebb-34826639233b/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.994036 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c" path="/var/lib/kubelet/pods/5f3bf7e5-bd0d-46f0-b5bf-86fe9e6c428c/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.994885 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9112f9c-911e-47d4-be64-e6f90fa6fa35" path="/var/lib/kubelet/pods/a9112f9c-911e-47d4-be64-e6f90fa6fa35/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.995980 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1f702d-7084-4e85-add9-15c10223d801" path="/var/lib/kubelet/pods/bf1f702d-7084-4e85-add9-15c10223d801/volumes" Jan 30 21:47:31 crc kubenswrapper[4751]: I0130 21:47:31.997671 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63e6079-6772-46c3-9ec3-1e01741a210f" path="/var/lib/kubelet/pods/e63e6079-6772-46c3-9ec3-1e01741a210f/volumes" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.099719 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303494 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.303551 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") pod \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\" (UID: \"25d1f8e8-75ed-46ae-b674-87f34c4edbfa\") " Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.309405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.310659 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb" (OuterVolumeSpecName: "kube-api-access-x6xjb") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "kube-api-access-x6xjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.335276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.345559 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory" (OuterVolumeSpecName: "inventory") pod "25d1f8e8-75ed-46ae-b674-87f34c4edbfa" (UID: "25d1f8e8-75ed-46ae-b674-87f34c4edbfa"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.406920 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.406993 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.407021 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6xjb\" (UniqueName: \"kubernetes.io/projected/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-kube-api-access-x6xjb\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.407046 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25d1f8e8-75ed-46ae-b674-87f34c4edbfa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.637915 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" event={"ID":"25d1f8e8-75ed-46ae-b674-87f34c4edbfa","Type":"ContainerDied","Data":"51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e"} Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.638040 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f8374b96e74508b8ea161ccefb7ec2d95c6112bacdf4605ef5155ad9ff2a2e" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.638789 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.732224 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:32 crc kubenswrapper[4751]: E0130 21:47:32.732963 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.732997 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: E0130 21:47:32.733030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733041 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733249 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a5fa77-b23e-417a-9854-929675be1c58" containerName="collect-profiles" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.733269 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d1f8e8-75ed-46ae-b674-87f34c4edbfa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.734331 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.738768 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.739495 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.749898 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.922891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.923009 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:32 crc kubenswrapper[4751]: I0130 21:47:32.923175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.025884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.026099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.026297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.030577 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.042629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.042962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-czw8p\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.054281 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.621408 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p"] Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.627929 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:47:33 crc kubenswrapper[4751]: I0130 21:47:33.655260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerStarted","Data":"871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40"} Jan 30 21:47:35 crc kubenswrapper[4751]: I0130 21:47:35.681797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerStarted","Data":"c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf"} Jan 30 21:47:35 crc kubenswrapper[4751]: I0130 21:47:35.700629 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" podStartSLOduration=2.7271200909999997 podStartE2EDuration="3.700613936s" podCreationTimestamp="2026-01-30 21:47:32 +0000 UTC" firstStartedPulling="2026-01-30 21:47:33.62774096 +0000 UTC m=+1992.373563609" lastFinishedPulling="2026-01-30 21:47:34.601234795 +0000 UTC m=+1993.347057454" observedRunningTime="2026-01-30 21:47:35.699729232 +0000 UTC m=+1994.445551901" watchObservedRunningTime="2026-01-30 21:47:35.700613936 +0000 UTC m=+1994.446436585" Jan 30 21:47:37 crc kubenswrapper[4751]: 
I0130 21:47:37.040317 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:47:37 crc kubenswrapper[4751]: I0130 21:47:37.049281 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z99cv"] Jan 30 21:47:37 crc kubenswrapper[4751]: I0130 21:47:37.999052 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc5d80d-ae17-431d-8e0f-6003af0fa6b1" path="/var/lib/kubelet/pods/0bc5d80d-ae17-431d-8e0f-6003af0fa6b1/volumes" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127061 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127758 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.127814 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.128730 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.128799 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" gracePeriod=600 Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906406 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" exitCode=0 Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0"} Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906972 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} Jan 30 21:47:54 crc kubenswrapper[4751]: I0130 21:47:54.906997 4751 scope.go:117] "RemoveContainer" containerID="210c4c83fb270c1e3116159d2d87cdeb3fb07a6f1463bbb392fc549f436a2f88" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.353000 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.356731 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.369724 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.556623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659175 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.659915 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.660031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"certified-operators-c4tsm\" (UID: 
\"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.699272 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"certified-operators-c4tsm\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:08 crc kubenswrapper[4751]: I0130 21:48:08.991865 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:09 crc kubenswrapper[4751]: I0130 21:48:09.514277 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082237 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" exitCode=0 Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c"} Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.082621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"7e9e07c3f9270f65741e77482a4cd10ab663916efa79256ff1798d504bbcfbbc"} Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.692535 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.695445 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.714143 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.730595 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.730895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.731005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832948 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.832990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.834050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.834075 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:10 crc kubenswrapper[4751]: I0130 21:48:10.854049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"redhat-operators-6xw94\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.040252 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.048700 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.062042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lwm4t"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.097308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.299198 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.303564 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.320639 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.448687 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.449032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.449373 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.550998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod 
\"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551612 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.551965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.555046 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.580263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"community-operators-8jb6x\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.636679 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:11 crc kubenswrapper[4751]: I0130 21:48:11.997749 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42b4031-ca3e-4b28-b62a-eb346132dc3a" path="/var/lib/kubelet/pods/d42b4031-ca3e-4b28-b62a-eb346132dc3a/volumes" Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.109272 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" exitCode=0 Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.111310 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c"} Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.111389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"e20468a60c3ca009f382cf700c3f7365eee5860b0272834ef0e7ae4dfae413e1"} Jan 30 21:48:12 crc kubenswrapper[4751]: I0130 21:48:12.209309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:12 crc kubenswrapper[4751]: W0130 21:48:12.214971 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602e9b0f_e15c_4855_a0e5_942f2f37f030.slice/crio-8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b WatchSource:0}: Error finding container 8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b: Status 404 returned error can't find the container with id 8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122344 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" exitCode=0 Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659"} Jan 30 21:48:13 crc kubenswrapper[4751]: I0130 21:48:13.122756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b"} Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.137969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.141549 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" exitCode=0 Jan 30 21:48:14 crc kubenswrapper[4751]: I0130 21:48:14.141608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} Jan 30 21:48:15 crc kubenswrapper[4751]: I0130 21:48:15.154848 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} Jan 30 21:48:16 crc kubenswrapper[4751]: I0130 21:48:16.169484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerStarted","Data":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} Jan 30 21:48:16 crc kubenswrapper[4751]: I0130 21:48:16.190253 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4tsm" podStartSLOduration=3.450087365 podStartE2EDuration="8.190232213s" podCreationTimestamp="2026-01-30 21:48:08 +0000 UTC" firstStartedPulling="2026-01-30 21:48:10.085212299 +0000 UTC m=+2028.831034958" lastFinishedPulling="2026-01-30 21:48:14.825357147 +0000 UTC m=+2033.571179806" observedRunningTime="2026-01-30 21:48:16.189217205 +0000 UTC m=+2034.935039864" watchObservedRunningTime="2026-01-30 21:48:16.190232213 +0000 UTC m=+2034.936054862" Jan 30 21:48:18 crc kubenswrapper[4751]: I0130 21:48:18.992910 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:18 crc kubenswrapper[4751]: I0130 21:48:18.993463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:19 crc kubenswrapper[4751]: I0130 21:48:19.207107 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" exitCode=0 Jan 30 21:48:19 crc kubenswrapper[4751]: I0130 21:48:19.207152 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.047259 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:20 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:20 crc kubenswrapper[4751]: > Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.220831 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerStarted","Data":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} Jan 30 21:48:20 crc kubenswrapper[4751]: I0130 21:48:20.243745 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jb6x" podStartSLOduration=2.735396521 podStartE2EDuration="9.243726599s" podCreationTimestamp="2026-01-30 21:48:11 +0000 UTC" firstStartedPulling="2026-01-30 21:48:13.125770103 +0000 
UTC m=+2031.871592752" lastFinishedPulling="2026-01-30 21:48:19.634100181 +0000 UTC m=+2038.379922830" observedRunningTime="2026-01-30 21:48:20.241130519 +0000 UTC m=+2038.986953188" watchObservedRunningTime="2026-01-30 21:48:20.243726599 +0000 UTC m=+2038.989549248" Jan 30 21:48:21 crc kubenswrapper[4751]: I0130 21:48:21.636995 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:21 crc kubenswrapper[4751]: I0130 21:48:21.637042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.242471 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" exitCode=0 Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.242537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} Jan 30 21:48:22 crc kubenswrapper[4751]: I0130 21:48:22.707646 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jb6x" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:22 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:22 crc kubenswrapper[4751]: > Jan 30 21:48:23 crc kubenswrapper[4751]: I0130 21:48:23.257622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerStarted","Data":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} Jan 30 21:48:23 crc kubenswrapper[4751]: I0130 21:48:23.282459 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xw94" podStartSLOduration=2.693903216 podStartE2EDuration="13.282442722s" podCreationTimestamp="2026-01-30 21:48:10 +0000 UTC" firstStartedPulling="2026-01-30 21:48:12.113092258 +0000 UTC m=+2030.858914907" lastFinishedPulling="2026-01-30 21:48:22.701631764 +0000 UTC m=+2041.447454413" observedRunningTime="2026-01-30 21:48:23.280539081 +0000 UTC m=+2042.026361730" watchObservedRunningTime="2026-01-30 21:48:23.282442722 +0000 UTC m=+2042.028265371" Jan 30 21:48:24 crc kubenswrapper[4751]: I0130 21:48:24.038764 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:48:24 crc kubenswrapper[4751]: I0130 21:48:24.060629 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v9spg"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.041217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.055524 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.071102 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-89chj"] Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.083291 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rt7v2"] Jan 30 
21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.331191 4751 scope.go:117] "RemoveContainer" containerID="dde509ef6f207cc2bcc76a35805e737a06489616a2c06460edf270c4d46949ff" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.361182 4751 scope.go:117] "RemoveContainer" containerID="c5ab688f9b8e1fb82010bd34dac14cc2f514cc43545c635a532a50efe0bee3a6" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.433629 4751 scope.go:117] "RemoveContainer" containerID="86dc09eda61ac7de53bc29716e31ede7719959b2e5920e15b3c99ca75f4be060" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.500292 4751 scope.go:117] "RemoveContainer" containerID="5e236b245c56a064616f5c0cfe68da26d9003a62ee339d2b96a7cc68c86cbcf4" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.560947 4751 scope.go:117] "RemoveContainer" containerID="757678878d640ed42bebe096fefc08e81d2dc4fdaa39596d495dfc07a6e988a4" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.624622 4751 scope.go:117] "RemoveContainer" containerID="51523e8d2edcc2046cb1a83c98d7a2fbd7964b697b149674907f1751f57faefe" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.686598 4751 scope.go:117] "RemoveContainer" containerID="9b73d59359bfb3a5bef8ccdbc1b9174270c6e66e22c29e992c6a512a45cd76ed" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.717081 4751 scope.go:117] "RemoveContainer" containerID="3f232e6698625cc60ad1770425a3662d4b2453997f82a2581cabc9a30c379df0" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.742431 4751 scope.go:117] "RemoveContainer" containerID="081f6f9ef52a04848276eea3741fadf9bc134d70d5112e179f163c9ecb46984e" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.767281 4751 scope.go:117] "RemoveContainer" containerID="dbe7739dccd34474fee5592432c44f2757e5e43cc8cb53f953f6011cf0eab9eb" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.991739 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8555e0d7-6d06-4edb-b463-86f7bf829949" path="/var/lib/kubelet/pods/8555e0d7-6d06-4edb-b463-86f7bf829949/volumes" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.992778 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90f6a78-a996-49f8-a567-d2699c737d1f" path="/var/lib/kubelet/pods/a90f6a78-a996-49f8-a567-d2699c737d1f/volumes" Jan 30 21:48:25 crc kubenswrapper[4751]: I0130 21:48:25.993474 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c714e3-2147-4f8a-97cd-2e62e0f3a955" path="/var/lib/kubelet/pods/b4c714e3-2147-4f8a-97cd-2e62e0f3a955/volumes" Jan 30 21:48:30 crc kubenswrapper[4751]: I0130 21:48:30.046624 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:30 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:30 crc kubenswrapper[4751]: > Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.041486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.041901 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.687924 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:31 crc 
kubenswrapper[4751]: I0130 21:48:31.737182 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:31 crc kubenswrapper[4751]: I0130 21:48:31.931438 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:32 crc kubenswrapper[4751]: I0130 21:48:32.090784 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:32 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:32 crc kubenswrapper[4751]: > Jan 30 21:48:33 crc kubenswrapper[4751]: I0130 21:48:33.349500 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jb6x" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" containerID="cri-o://2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" gracePeriod=2 Jan 30 21:48:33 crc kubenswrapper[4751]: I0130 21:48:33.931097 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031431 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031557 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.031655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") pod \"602e9b0f-e15c-4855-a0e5-942f2f37f030\" (UID: \"602e9b0f-e15c-4855-a0e5-942f2f37f030\") " Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.034669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities" (OuterVolumeSpecName: "utilities") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.038401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk" (OuterVolumeSpecName: "kube-api-access-bcjxk") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "kube-api-access-bcjxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.107511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602e9b0f-e15c-4855-a0e5-942f2f37f030" (UID: "602e9b0f-e15c-4855-a0e5-942f2f37f030"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.133963 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcjxk\" (UniqueName: \"kubernetes.io/projected/602e9b0f-e15c-4855-a0e5-942f2f37f030-kube-api-access-bcjxk\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.133999 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.134008 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602e9b0f-e15c-4855-a0e5-942f2f37f030-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360862 4751 generic.go:334] "Generic (PLEG): container finished" podID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" exitCode=0 Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360927 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360938 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jb6x" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jb6x" event={"ID":"602e9b0f-e15c-4855-a0e5-942f2f37f030","Type":"ContainerDied","Data":"8552e2294dac82fcbf0637ad97d4993ccc8c29e4682eee817b2f69ea8c621a0b"} Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.360985 4751 scope.go:117] "RemoveContainer" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.384414 4751 scope.go:117] "RemoveContainer" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.400157 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.410863 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jb6x"] Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.417704 4751 scope.go:117] "RemoveContainer" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.490106 4751 scope.go:117] "RemoveContainer" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.495413 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": container with ID starting with 2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d not found: ID does not exist" containerID="2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.495456 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d"} err="failed to get container status \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": rpc error: code = NotFound desc = could not find container \"2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d\": container with ID starting with 2bbc9843de3af408ee3ac6f7e6ea53921b34758a2a248bc31fef67534e126d3d not found: ID does not exist" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.495673 4751 scope.go:117] "RemoveContainer" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.496240 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": container with ID starting with 8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f not found: ID does not exist" containerID="8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496287 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f"} err="failed to get container status \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": rpc error: code = NotFound desc = could not find 
container \"8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f\": container with ID starting with 8b583759b6badc88d1b33fd7f2ce5ded4fedddafaece7ba7029511f4fae80f0f not found: ID does not exist" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496324 4751 scope.go:117] "RemoveContainer" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: E0130 21:48:34.496897 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": container with ID starting with c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659 not found: ID does not exist" containerID="c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659" Jan 30 21:48:34 crc kubenswrapper[4751]: I0130 21:48:34.496922 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659"} err="failed to get container status \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": rpc error: code = NotFound desc = could not find container \"c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659\": container with ID starting with c10dff2ac0b8ab462c8bb2c1562ed0a3fc23fecfc917b1232799551324667659 not found: ID does not exist" Jan 30 21:48:35 crc kubenswrapper[4751]: I0130 21:48:35.988525 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" path="/var/lib/kubelet/pods/602e9b0f-e15c-4855-a0e5-942f2f37f030/volumes" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.057024 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.129461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:39 crc kubenswrapper[4751]: I0130 21:48:39.501181 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:40 crc kubenswrapper[4751]: I0130 21:48:40.418181 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4tsm" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" containerID="cri-o://d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" gracePeriod=2 Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.038728 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.048319 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.058547 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bq6lp"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161481 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161633 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.161738 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") pod \"329634a4-1673-4657-a0fb-bbf17bfc55c7\" (UID: \"329634a4-1673-4657-a0fb-bbf17bfc55c7\") " Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.162599 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities" (OuterVolumeSpecName: "utilities") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.163758 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.167553 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858" (OuterVolumeSpecName: "kube-api-access-kd858") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "kube-api-access-kd858". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.218371 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "329634a4-1673-4657-a0fb-bbf17bfc55c7" (UID: "329634a4-1673-4657-a0fb-bbf17bfc55c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.266253 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329634a4-1673-4657-a0fb-bbf17bfc55c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.266299 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd858\" (UniqueName: \"kubernetes.io/projected/329634a4-1673-4657-a0fb-bbf17bfc55c7-kube-api-access-kd858\") on node \"crc\" DevicePath \"\"" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431248 4751 generic.go:334] "Generic (PLEG): container finished" podID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" exitCode=0 Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431307 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4tsm" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4tsm" event={"ID":"329634a4-1673-4657-a0fb-bbf17bfc55c7","Type":"ContainerDied","Data":"7e9e07c3f9270f65741e77482a4cd10ab663916efa79256ff1798d504bbcfbbc"} Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.431449 4751 scope.go:117] "RemoveContainer" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.463244 4751 scope.go:117] "RemoveContainer" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.468124 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.478631 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4tsm"] Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.491696 4751 scope.go:117] "RemoveContainer" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.547682 4751 scope.go:117] "RemoveContainer" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.548396 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": container with ID starting with d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b not found: ID does not exist" containerID="d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548464 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b"} err="failed to get container status 
\"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": rpc error: code = NotFound desc = could not find container \"d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b\": container with ID starting with d3ca4235fe0b6e079743a9f3e8a970c463b6fb7d8c2a61a5085e86cba4563b8b not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548511 4751 scope.go:117] "RemoveContainer" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.548927 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": container with ID starting with 79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240 not found: ID does not exist" containerID="79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.548982 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240"} err="failed to get container status \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": rpc error: code = NotFound desc = could not find container \"79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240\": container with ID starting with 79084a63ba3f9f58d27216106e2c74ff7f3f05d66c94f99b66213d22feca3240 not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.549011 4751 scope.go:117] "RemoveContainer" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: E0130 21:48:41.549324 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": container with ID starting with a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c not found: ID does not exist" containerID="a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.549387 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c"} err="failed to get container status \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": rpc error: code = NotFound desc = could not find container \"a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c\": container with ID starting with a39d56e573ead34e5ea4a73cdb3f1ebb2184c4cba2db499a09a43670ec878f5c not found: ID does not exist" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.990097 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" path="/var/lib/kubelet/pods/329634a4-1673-4657-a0fb-bbf17bfc55c7/volumes" Jan 30 21:48:41 crc kubenswrapper[4751]: I0130 21:48:41.991067 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564f3d8f-4b9f-4fe2-9464-baa31d6b7d24" path="/var/lib/kubelet/pods/564f3d8f-4b9f-4fe2-9464-baa31d6b7d24/volumes" Jan 30 21:48:42 crc kubenswrapper[4751]: I0130 21:48:42.106100 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" 
probeResult="failure" output=< Jan 30 21:48:42 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:42 crc kubenswrapper[4751]: > Jan 30 21:48:52 crc kubenswrapper[4751]: I0130 21:48:52.092750 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:48:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:48:52 crc kubenswrapper[4751]: > Jan 30 21:49:02 crc kubenswrapper[4751]: I0130 21:49:02.087803 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" probeResult="failure" output=< Jan 30 21:49:02 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 21:49:02 crc kubenswrapper[4751]: > Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.097177 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.151292 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:11 crc kubenswrapper[4751]: I0130 21:49:11.342161 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:12 crc kubenswrapper[4751]: I0130 21:49:12.876401 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xw94" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" containerID="cri-o://af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" gracePeriod=2 Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.477688 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.558830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.558965 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.559199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") pod \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\" (UID: \"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813\") " Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.563000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities" (OuterVolumeSpecName: "utilities") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.565066 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7" (OuterVolumeSpecName: "kube-api-access-kj8c7") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "kube-api-access-kj8c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.663569 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.663598 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj8c7\" (UniqueName: \"kubernetes.io/projected/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-kube-api-access-kj8c7\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.679304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" (UID: "3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.765920 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887699 4751 generic.go:334] "Generic (PLEG): container finished" podID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" exitCode=0 Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887767 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xw94" event={"ID":"3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813","Type":"ContainerDied","Data":"e20468a60c3ca009f382cf700c3f7365eee5860b0272834ef0e7ae4dfae413e1"} Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887773 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xw94" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.887786 4751 scope.go:117] "RemoveContainer" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.919251 4751 scope.go:117] "RemoveContainer" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.938696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.949189 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xw94"] Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.960745 4751 scope.go:117] "RemoveContainer" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:13 crc kubenswrapper[4751]: I0130 21:49:13.990865 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" path="/var/lib/kubelet/pods/3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813/volumes" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005385 4751 scope.go:117] "RemoveContainer" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.005806 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": container with ID starting with af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0 not found: ID does not exist" containerID="af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005841 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0"} err="failed to get container status \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": rpc error: code = NotFound desc = could not find container \"af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0\": container with ID starting with af46ee9eb151afcf0c3abe6e22364af64013d0a98fb7de81278f20213eec90a0 not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.005866 4751 scope.go:117] "RemoveContainer" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.006453 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": container with ID starting with 9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb not found: ID does not exist" containerID="9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006503 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb"} err="failed to get container status \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": rpc error: code = NotFound desc = could not find container \"9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb\": 
container with ID starting with 9a9c4e80a3a6874da45849277058a1a0698619fd82837be7ed63ab033aaeb0fb not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006535 4751 scope.go:117] "RemoveContainer" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:14 crc kubenswrapper[4751]: E0130 21:49:14.006842 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": container with ID starting with f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c not found: ID does not exist" containerID="f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.006879 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c"} err="failed to get container status \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": rpc error: code = NotFound desc = could not find container \"f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c\": container with ID starting with f98f144d80d133e424f24069c280712338540cdd5d832e0eafae5a98e859041c not found: ID does not exist" Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.046510 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:49:14 crc kubenswrapper[4751]: I0130 21:49:14.060577 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cdda-account-create-update-xfmk4"] Jan 30 21:49:15 crc kubenswrapper[4751]: I0130 21:49:15.990773 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7625d34-2ace-4774-89e4-72729d19ce99" path="/var/lib/kubelet/pods/f7625d34-2ace-4774-89e4-72729d19ce99/volumes" Jan 30 21:49:22 crc kubenswrapper[4751]: I0130 21:49:22.048058 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:49:22 crc kubenswrapper[4751]: I0130 21:49:22.059675 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6d52w"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.045248 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.059743 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.075459 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.085847 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-kj2ld"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.097268 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6e74-account-create-update-gdfb4"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.107129 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hx7xn"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.137141 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.150863 4751 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2281-account-create-update-5l5m8"] Jan 30 21:49:23 crc kubenswrapper[4751]: I0130 21:49:23.999884 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444e34d6-7904-405b-956e-d23aed56537e" path="/var/lib/kubelet/pods/444e34d6-7904-405b-956e-d23aed56537e/volumes" Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.000981 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f139e0b-3ae5-4d5c-aa87-f15d00373f98" path="/var/lib/kubelet/pods/6f139e0b-3ae5-4d5c-aa87-f15d00373f98/volumes" Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.001627 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a169fb7b-bcf8-44d8-8942-a42a4de6001d" path="/var/lib/kubelet/pods/a169fb7b-bcf8-44d8-8942-a42a4de6001d/volumes" Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.002430 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8312bae-69c5-4c31-844e-42a90c18bfd3" path="/var/lib/kubelet/pods/a8312bae-69c5-4c31-844e-42a90c18bfd3/volumes" Jan 30 21:49:24 crc kubenswrapper[4751]: I0130 21:49:24.003831 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5" path="/var/lib/kubelet/pods/bb9b83cb-2d2f-454e-bc55-4b9207e1bfc5/volumes" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.078601 4751 scope.go:117] "RemoveContainer" containerID="819319f0811868394aa97eff76f3853ec44f21bc4e3fff54753bf1a73c6cb040" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.120130 4751 scope.go:117] "RemoveContainer" containerID="bab22938ba50080dd9d55dae8178cb60ae0c855052ff172ac9ad37da3248c397" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.178424 4751 scope.go:117] "RemoveContainer" containerID="1b2d27c5fa8a33163c2a6acc216d5d997d31face25a7d5b27edce913d857e2cf" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.224540 4751 scope.go:117] "RemoveContainer" containerID="b0651f2072f5243ce1ec548bf97964b55b91bb1b69f6154e95b941b6b4ae52c4" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.280657 4751 scope.go:117] "RemoveContainer" containerID="95a526791a15478d3cd5022079224ebd6df133da08d333aae07b4e691a9b11fa" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.349707 4751 scope.go:117] "RemoveContainer" containerID="653c6822da8dfb62b9974deaabbf6807b6ceb59b253232e41aa972ac9d77b452" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.381954 4751 scope.go:117] "RemoveContainer" containerID="2c32df1a18d1df9fb91c33d8041010429300b75cb742162ff675699f4b703b35" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.406592 4751 scope.go:117] "RemoveContainer" containerID="801374aeb4ac1cff7c0c384bd6f348009c3a008674d2c7a597e16dd316c97dcd" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.432869 4751 scope.go:117] "RemoveContainer" containerID="1417d08c74e8789435dc5b0b0ef29190de93021ca824a689f4696bce6b1679a8" Jan 30 21:49:26 crc kubenswrapper[4751]: I0130 21:49:26.462976 4751 scope.go:117] "RemoveContainer" containerID="2a0909f318a30556974662d8829ef78a359e73d89a596474535f309a8b496094" Jan 30 21:49:54 crc kubenswrapper[4751]: I0130 21:49:54.126634 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:49:54 crc kubenswrapper[4751]: I0130 
21:49:54.127597 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.049960 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.061902 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-47sz5"] Jan 30 21:49:59 crc kubenswrapper[4751]: I0130 21:49:59.990895 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551aecfb-7969-4644-ac50-b8f4c63002d3" path="/var/lib/kubelet/pods/551aecfb-7969-4644-ac50-b8f4c63002d3/volumes" Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.046390 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.057216 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0010-account-create-update-t2pkp"] Jan 30 21:50:19 crc kubenswrapper[4751]: I0130 21:50:19.991580 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb55a4f-933c-4871-b2d4-aed75e1449d7" path="/var/lib/kubelet/pods/8eb55a4f-933c-4871-b2d4-aed75e1449d7/volumes" Jan 30 21:50:20 crc kubenswrapper[4751]: I0130 21:50:20.030072 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:50:20 crc kubenswrapper[4751]: I0130 21:50:20.040919 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vm8dd"] Jan 30 21:50:21 crc kubenswrapper[4751]: I0130 21:50:21.991108 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f243fc38-73c3-44ef-98b1-8c3086761087" path="/var/lib/kubelet/pods/f243fc38-73c3-44ef-98b1-8c3086761087/volumes" Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.037452 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.048310 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x464h"] Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.126760 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:50:24 crc kubenswrapper[4751]: I0130 21:50:24.126812 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:25 crc kubenswrapper[4751]: I0130 21:50:25.991160 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd" path="/var/lib/kubelet/pods/0dbeec54-59ed-4ab1-8faf-2e29cc5f90bd/volumes" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.754214 4751 scope.go:117] "RemoveContainer" 
containerID="c7bbfed7681d291cdba9800f3f96dcb721e8fd4853af323f0d29ccee985d7e37" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.788724 4751 scope.go:117] "RemoveContainer" containerID="33e12e7a910a881a922ed171c1d2a5e92dc23378252c88a8cc488f46dcc7cd9c" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.894819 4751 scope.go:117] "RemoveContainer" containerID="6d49b61e92e6eef2d8083686a2afeb4d6ae7d468f3b7fa9aa7d17b2c30415daf" Jan 30 21:50:26 crc kubenswrapper[4751]: I0130 21:50:26.960319 4751 scope.go:117] "RemoveContainer" containerID="17690b46bb105b4071eb9244efb55112436407df788ac66de199405e58cab561" Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.062166 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.079934 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-php6q"] Jan 30 21:50:31 crc kubenswrapper[4751]: I0130 21:50:31.998743 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e675971e-ba0e-4630-bc1b-bdf47a433dd7" path="/var/lib/kubelet/pods/e675971e-ba0e-4630-bc1b-bdf47a433dd7/volumes" Jan 30 21:50:33 crc kubenswrapper[4751]: I0130 21:50:33.844885 4751 generic.go:334] "Generic (PLEG): container finished" podID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerID="c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf" exitCode=0 Jan 30 21:50:33 crc kubenswrapper[4751]: I0130 21:50:33.844963 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerDied","Data":"c30ac0e8be6c0815dba51db8d797c69e9ca710b0af1c400a4c5b3a4c953e6ebf"} Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.407557 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.519719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.519971 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.520034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") pod \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\" (UID: \"b45d4d88-6b91-4bfc-9619-68fdb7d90f05\") " Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.525624 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d" (OuterVolumeSpecName: "kube-api-access-6wz8d") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "kube-api-access-6wz8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.567181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.582414 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory" (OuterVolumeSpecName: "inventory") pod "b45d4d88-6b91-4bfc-9619-68fdb7d90f05" (UID: "b45d4d88-6b91-4bfc-9619-68fdb7d90f05"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623216 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wz8d\" (UniqueName: \"kubernetes.io/projected/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-kube-api-access-6wz8d\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623795 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.623928 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b45d4d88-6b91-4bfc-9619-68fdb7d90f05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869191 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" event={"ID":"b45d4d88-6b91-4bfc-9619-68fdb7d90f05","Type":"ContainerDied","Data":"871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40"} Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869498 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="871112c40c834fc0ee43096e7b64ddfc12d71cae78e7a64ab8d6c06bfe6ebb40" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.869291 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-czw8p" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.967912 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"] Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968411 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968429 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968440 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968447 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968459 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968465 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968473 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968479 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968488 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968495 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968509 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968514 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968530 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968536 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968550 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 
21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968583 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968590 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-content" Jan 30 21:50:35 crc kubenswrapper[4751]: E0130 21:50:35.968608 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968613 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="extract-utilities" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3c4709-cf61-4ba5-a7d0-b43c4f7c0813" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968841 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="329634a4-1673-4657-a0fb-bbf17bfc55c7" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968859 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="602e9b0f-e15c-4855-a0e5-942f2f37f030" containerName="registry-server" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.968877 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45d4d88-6b91-4bfc-9619-68fdb7d90f05" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.969913 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979461 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979754 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.979898 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:50:35 crc kubenswrapper[4751]: I0130 21:50:35.980068 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.015357 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"] Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138276 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.138346 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.241502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.246040 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.252411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.258419 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.313836 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:50:36 crc kubenswrapper[4751]: I0130 21:50:36.887716 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz"] Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.897910 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerStarted","Data":"a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63"} Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.898143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerStarted","Data":"8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144"} Jan 30 21:50:37 crc kubenswrapper[4751]: I0130 21:50:37.914294 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" podStartSLOduration=2.492257927 podStartE2EDuration="2.914259898s" podCreationTimestamp="2026-01-30 21:50:35 +0000 UTC" firstStartedPulling="2026-01-30 21:50:36.897544163 +0000 UTC m=+2175.643366832" lastFinishedPulling="2026-01-30 21:50:37.319546154 +0000 UTC m=+2176.065368803" observedRunningTime="2026-01-30 21:50:37.912694036 +0000 UTC m=+2176.658516695" watchObservedRunningTime="2026-01-30 21:50:37.914259898 +0000 UTC m=+2176.660082597" Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.126676 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.129215 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.129387 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.130564 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:50:54 crc kubenswrapper[4751]: I0130 21:50:54.130720 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" gracePeriod=600 Jan 30 21:50:54 crc kubenswrapper[4751]: E0130 21:50:54.278106 4751 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.075900 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" exitCode=0 Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.075969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9"} Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.076798 4751 scope.go:117] "RemoveContainer" containerID="e83ca35bd085af955b4b3e0476bcb9169304b85473995bcb3f76de779bdcffb0" Jan 30 21:50:55 crc kubenswrapper[4751]: I0130 21:50:55.078080 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:50:55 crc kubenswrapper[4751]: E0130 21:50:55.078627 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:06 crc kubenswrapper[4751]: I0130 21:51:06.977177 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:06 crc kubenswrapper[4751]: E0130 21:51:06.979146 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:12 crc kubenswrapper[4751]: I0130 21:51:12.048348 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:51:12 crc kubenswrapper[4751]: I0130 21:51:12.059591 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-cxj8k"] Jan 30 21:51:13 crc kubenswrapper[4751]: I0130 21:51:13.994731 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ec060c-3c30-41e4-946c-7fb4584c7e85" path="/var/lib/kubelet/pods/97ec060c-3c30-41e4-946c-7fb4584c7e85/volumes" Jan 30 21:51:21 crc kubenswrapper[4751]: I0130 21:51:21.997972 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:22 crc kubenswrapper[4751]: E0130 21:51:22.000828 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:27 crc kubenswrapper[4751]: I0130 21:51:27.118051 4751 scope.go:117] "RemoveContainer" containerID="15b7f342e1abdf85738c2010d50dd3bc6a8ad893d7ecab47d753b5a1b032305d" Jan 30 21:51:27 crc kubenswrapper[4751]: I0130 21:51:27.161598 4751 scope.go:117] "RemoveContainer" containerID="d9cc3235ea6a465f2a125270f4c9765fed925e13c4baa2e715494daa6238d33f" Jan 30 21:51:36 crc kubenswrapper[4751]: I0130 21:51:36.976069 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:36 crc kubenswrapper[4751]: E0130 21:51:36.977162 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:40 crc kubenswrapper[4751]: I0130 21:51:40.543269 4751 generic.go:334] "Generic (PLEG): container finished" podID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerID="a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63" exitCode=0 Jan 30 21:51:40 crc kubenswrapper[4751]: I0130 21:51:40.543433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerDied","Data":"a7c779b234512f8b77468952ed7db747aa6f4d564b951f04213d13fdfbc73d63"} Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.053690 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.109905 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.110046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.110921 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") pod \"a21b5781-ce12-434c-9f38-47bf5f6ad332\" (UID: \"a21b5781-ce12-434c-9f38-47bf5f6ad332\") " Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.128602 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw" (OuterVolumeSpecName: "kube-api-access-gv8xw") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). 
InnerVolumeSpecName "kube-api-access-gv8xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.143467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory" (OuterVolumeSpecName: "inventory") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.175552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a21b5781-ce12-434c-9f38-47bf5f6ad332" (UID: "a21b5781-ce12-434c-9f38-47bf5f6ad332"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214444 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214482 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8xw\" (UniqueName: \"kubernetes.io/projected/a21b5781-ce12-434c-9f38-47bf5f6ad332-kube-api-access-gv8xw\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.214500 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a21b5781-ce12-434c-9f38-47bf5f6ad332-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.576702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" event={"ID":"a21b5781-ce12-434c-9f38-47bf5f6ad332","Type":"ContainerDied","Data":"8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144"} Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.577018 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b16809b402a2b33c62fd6eb1118ebc1aaf20a76b782d78a45f03fb5b59e3144" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.577116 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.666533 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:42 crc kubenswrapper[4751]: E0130 21:51:42.667316 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.667435 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.667816 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a21b5781-ce12-434c-9f38-47bf5f6ad332" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.669244 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675465 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675630 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.675858 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.676755 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.726710 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.727310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.727477 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.829372 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.833788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.834161 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.846890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-plxr5\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:42 crc kubenswrapper[4751]: I0130 21:51:42.996475 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:43 crc kubenswrapper[4751]: I0130 21:51:43.532834 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5"] Jan 30 21:51:43 crc kubenswrapper[4751]: I0130 21:51:43.589418 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerStarted","Data":"e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324"} Jan 30 21:51:44 crc kubenswrapper[4751]: I0130 21:51:44.605484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerStarted","Data":"cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a"} Jan 30 21:51:44 crc kubenswrapper[4751]: I0130 21:51:44.634589 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" podStartSLOduration=2.188709216 podStartE2EDuration="2.634545891s" podCreationTimestamp="2026-01-30 21:51:42 +0000 UTC" firstStartedPulling="2026-01-30 21:51:43.536136721 +0000 UTC m=+2242.281959390" lastFinishedPulling="2026-01-30 21:51:43.981973416 +0000 UTC m=+2242.727796065" observedRunningTime="2026-01-30 21:51:44.624152428 +0000 UTC m=+2243.369975077" watchObservedRunningTime="2026-01-30 21:51:44.634545891 +0000 UTC m=+2243.380368540" Jan 30 21:51:47 crc kubenswrapper[4751]: I0130 21:51:47.976866 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:51:47 crc kubenswrapper[4751]: E0130 21:51:47.977814 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:51:48 crc kubenswrapper[4751]: I0130 21:51:48.648957 4751 generic.go:334] "Generic (PLEG): container finished" podID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerID="cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a" exitCode=0 Jan 30 21:51:48 crc kubenswrapper[4751]: I0130 21:51:48.649026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerDied","Data":"cb8ae5e2affef33560219351fbe7944b686569dcc42ec07890013c934f74a73a"} Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.141211 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229206 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.229451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") pod \"538f9f69-1642-4944-a5e1-7348a104c5e6\" (UID: \"538f9f69-1642-4944-a5e1-7348a104c5e6\") " Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.238406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68" (OuterVolumeSpecName: "kube-api-access-knq68") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "kube-api-access-knq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.265213 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory" (OuterVolumeSpecName: "inventory") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.288516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "538f9f69-1642-4944-a5e1-7348a104c5e6" (UID: "538f9f69-1642-4944-a5e1-7348a104c5e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332471 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332512 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/538f9f69-1642-4944-a5e1-7348a104c5e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.332522 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knq68\" (UniqueName: \"kubernetes.io/projected/538f9f69-1642-4944-a5e1-7348a104c5e6-kube-api-access-knq68\") on node \"crc\" DevicePath \"\"" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674735 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" event={"ID":"538f9f69-1642-4944-a5e1-7348a104c5e6","Type":"ContainerDied","Data":"e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324"} Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674799 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e219e09b3c857b7e7aa40769b83e0190b3c1bc2841c1fcd0227b1672de545324" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.674837 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-plxr5" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.763738 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:50 crc kubenswrapper[4751]: E0130 21:51:50.764264 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.764284 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.764595 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="538f9f69-1642-4944-a5e1-7348a104c5e6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.765449 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.777812 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778010 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778015 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.778157 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.797499 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.846852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.847217 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.847313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949179 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.949265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.953112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.953669 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:50 crc kubenswrapper[4751]: I0130 21:51:50.966985 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-trgr7\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.095581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.663986 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7"] Jan 30 21:51:51 crc kubenswrapper[4751]: I0130 21:51:51.684361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerStarted","Data":"a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a"} Jan 30 21:51:52 crc kubenswrapper[4751]: I0130 21:51:52.697514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerStarted","Data":"367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f"} Jan 30 21:51:52 crc kubenswrapper[4751]: I0130 21:51:52.726796 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" podStartSLOduration=2.289203623 podStartE2EDuration="2.726775993s" podCreationTimestamp="2026-01-30 21:51:50 +0000 UTC" firstStartedPulling="2026-01-30 21:51:51.666383529 +0000 UTC m=+2250.412206188" lastFinishedPulling="2026-01-30 21:51:52.103955909 +0000 UTC m=+2250.849778558" observedRunningTime="2026-01-30 21:51:52.721004366 +0000 UTC m=+2251.466827025" watchObservedRunningTime="2026-01-30 21:51:52.726775993 +0000 UTC m=+2251.472598652" Jan 30 21:52:00 crc kubenswrapper[4751]: I0130 21:52:00.976352 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:00 crc kubenswrapper[4751]: E0130 21:52:00.977259 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:11 crc kubenswrapper[4751]: I0130 21:52:11.986260 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:11 crc kubenswrapper[4751]: E0130 21:52:11.987260 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.412122 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.415991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.423824 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.524512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627289 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " 
pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.627931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.628413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.651906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"redhat-marketplace-7xkmv\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:21 crc kubenswrapper[4751]: I0130 21:52:21.780161 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:22 crc kubenswrapper[4751]: I0130 21:52:22.335516 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:22 crc kubenswrapper[4751]: I0130 21:52:22.975490 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:22 crc kubenswrapper[4751]: E0130 21:52:22.976084 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015132 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" exitCode=0 Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d"} Jan 30 21:52:23 crc kubenswrapper[4751]: I0130 21:52:23.015216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" 
event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"09ba0b000866b341461be8fd8aa4a6998ab86161b69ce1d54a289f04c444e094"} Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.047945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.054843 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerID="367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f" exitCode=0 Jan 30 21:52:25 crc kubenswrapper[4751]: I0130 21:52:25.054881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerDied","Data":"367fd3667550a5d0d4cc5ab26c8f9f154492a6ce72190b8a861ed642c787870f"} Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.065874 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" exitCode=0 Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.065948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.523052 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.643852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.644086 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.644222 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") pod \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\" (UID: \"aa80e137-3a03-4857-9ec0-aa2f9b58df0d\") " Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.650690 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4" (OuterVolumeSpecName: "kube-api-access-d2ct4") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "kube-api-access-d2ct4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.676876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory" (OuterVolumeSpecName: "inventory") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.686027 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa80e137-3a03-4857-9ec0-aa2f9b58df0d" (UID: "aa80e137-3a03-4857-9ec0-aa2f9b58df0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746508 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746542 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2ct4\" (UniqueName: \"kubernetes.io/projected/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-kube-api-access-d2ct4\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:26 crc kubenswrapper[4751]: I0130 21:52:26.746554 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa80e137-3a03-4857-9ec0-aa2f9b58df0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.077442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerStarted","Data":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079162 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" event={"ID":"aa80e137-3a03-4857-9ec0-aa2f9b58df0d","Type":"ContainerDied","Data":"a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a"} Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079192 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f316a0cd1ae67e29d2d85363c55f6874baca6a681b729bfa02385b2545da1a" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.079249 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-trgr7" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.124295 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xkmv" podStartSLOduration=2.583815433 podStartE2EDuration="6.124272867s" podCreationTimestamp="2026-01-30 21:52:21 +0000 UTC" firstStartedPulling="2026-01-30 21:52:23.017068139 +0000 UTC m=+2281.762890788" lastFinishedPulling="2026-01-30 21:52:26.557525573 +0000 UTC m=+2285.303348222" observedRunningTime="2026-01-30 21:52:27.109145284 +0000 UTC m=+2285.854967963" watchObservedRunningTime="2026-01-30 21:52:27.124272867 +0000 UTC m=+2285.870095526" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.181235 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:27 crc kubenswrapper[4751]: E0130 21:52:27.181737 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.181753 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.182256 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa80e137-3a03-4857-9ec0-aa2f9b58df0d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.183066 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186012 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186473 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.186598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.194734 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.258313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.258858 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.259252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361456 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.361489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.366401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.367728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.385678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-czgz2\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:27 crc kubenswrapper[4751]: I0130 21:52:27.541436 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:52:28 crc kubenswrapper[4751]: I0130 21:52:28.106522 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2"] Jan 30 21:52:28 crc kubenswrapper[4751]: W0130 21:52:28.110031 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c3c6d6_5ce3_4522_acc1_1ebbe5748f0d.slice/crio-5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072 WatchSource:0}: Error finding container 5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072: Status 404 returned error can't find the container with id 5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072 Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.100754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerStarted","Data":"5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d"} Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.101526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerStarted","Data":"5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072"} Jan 30 21:52:29 crc kubenswrapper[4751]: I0130 21:52:29.129946 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" podStartSLOduration=1.728315925 podStartE2EDuration="2.129890882s" podCreationTimestamp="2026-01-30 21:52:27 +0000 UTC" firstStartedPulling="2026-01-30 21:52:28.11174171 +0000 UTC m=+2286.857564369" lastFinishedPulling="2026-01-30 21:52:28.513316677 +0000 UTC m=+2287.259139326" observedRunningTime="2026-01-30 21:52:29.118787539 +0000 UTC m=+2287.864610198" watchObservedRunningTime="2026-01-30 21:52:29.129890882 +0000 UTC m=+2287.875713541" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.781239 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.781918 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:31 crc kubenswrapper[4751]: I0130 21:52:31.854287 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:32 crc kubenswrapper[4751]: I0130 21:52:32.196531 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:32 crc kubenswrapper[4751]: I0130 21:52:32.247226 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.159909 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xkmv" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" containerID="cri-o://aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" gracePeriod=2 Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.683799 4751 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745165 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.745360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") pod \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\" (UID: \"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60\") " Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.746462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities" (OuterVolumeSpecName: "utilities") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.755105 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m" (OuterVolumeSpecName: "kube-api-access-tfs2m") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "kube-api-access-tfs2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.781462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" (UID: "7b69623d-0f4c-4ac8-b36d-bd431d5aeb60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848671 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848707 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfs2m\" (UniqueName: \"kubernetes.io/projected/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-kube-api-access-tfs2m\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:34 crc kubenswrapper[4751]: I0130 21:52:34.848720 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.171995 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" exitCode=0 Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172050 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xkmv" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xkmv" event={"ID":"7b69623d-0f4c-4ac8-b36d-bd431d5aeb60","Type":"ContainerDied","Data":"09ba0b000866b341461be8fd8aa4a6998ab86161b69ce1d54a289f04c444e094"} Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.172180 4751 scope.go:117] "RemoveContainer" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.202414 4751 scope.go:117] "RemoveContainer" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.210784 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.220633 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xkmv"] Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.231993 4751 scope.go:117] "RemoveContainer" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307297 4751 scope.go:117] "RemoveContainer" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.307723 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": container with ID starting with aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562 not found: ID does not exist" containerID="aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307751 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562"} err="failed to get container status \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": rpc error: code = NotFound desc = could not find container \"aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562\": container with ID starting with aff0cc54cff4119730216d0b5cde5e979c19e41f766a8033ede4c261939aa562 not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.307769 4751 scope.go:117] "RemoveContainer" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.308151 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": container with ID starting with 2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac not found: ID does not exist" containerID="2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308175 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac"} err="failed to get container status \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": rpc error: code = NotFound desc = could not find container \"2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac\": container with ID starting with 2525c163047c82fc8ed0efeed7c11591a753132791c4d3b9128d49e646a268ac not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308189 4751 scope.go:117] "RemoveContainer" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: E0130 21:52:35.308491 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": container with ID starting with b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d not found: ID does not exist" containerID="b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.308538 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d"} err="failed to get container status \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": rpc error: code = NotFound desc = could not find container \"b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d\": container with ID starting with b131183dd70f412f27fa37decba685afe22b38b2dc2ea3b2dbed009285fff14d not found: ID does not exist" Jan 30 21:52:35 crc kubenswrapper[4751]: I0130 21:52:35.994207 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" path="/var/lib/kubelet/pods/7b69623d-0f4c-4ac8-b36d-bd431d5aeb60/volumes" Jan 30 21:52:36 crc kubenswrapper[4751]: I0130 21:52:36.976462 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:36 crc kubenswrapper[4751]: E0130 21:52:36.976967 4751 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:52:51 crc kubenswrapper[4751]: I0130 21:52:51.985266 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:52:51 crc kubenswrapper[4751]: E0130 21:52:51.986394 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:06 crc kubenswrapper[4751]: I0130 21:53:06.976394 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:06 crc kubenswrapper[4751]: E0130 21:53:06.977388 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:11 crc kubenswrapper[4751]: I0130 21:53:11.594410 4751 generic.go:334] "Generic (PLEG): container finished" podID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerID="5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d" exitCode=0 Jan 30 21:53:11 crc kubenswrapper[4751]: I0130 21:53:11.594501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerDied","Data":"5c41e9203b76ddcd7439688158ffdc87678069a17d135cc32112f263aa9ea41d"} Jan 30 21:53:12 crc kubenswrapper[4751]: I0130 21:53:12.050611 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:53:12 crc kubenswrapper[4751]: I0130 21:53:12.065359 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xw5xf"] Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.196031 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.389681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.390753 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.390797 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") pod \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\" (UID: \"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d\") " Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.395872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4" (OuterVolumeSpecName: "kube-api-access-vdbt4") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "kube-api-access-vdbt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.422856 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory" (OuterVolumeSpecName: "inventory") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.447624 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" (UID: "39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493759 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdbt4\" (UniqueName: \"kubernetes.io/projected/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-kube-api-access-vdbt4\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493817 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.493832 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" event={"ID":"39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d","Type":"ContainerDied","Data":"5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072"} Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614597 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7854022bb4b1a27210ed4d576703ccdd028b1c681e62b72656a04ce9d99072" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.614687 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-czgz2" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.761782 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762497 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762521 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762560 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-utilities" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762570 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-utilities" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762587 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-content" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762595 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="extract-content" Jan 30 21:53:13 crc kubenswrapper[4751]: E0130 21:53:13.762606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.762614 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.763085 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7b69623d-0f4c-4ac8-b36d-bd431d5aeb60" containerName="registry-server" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.763141 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.764246 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.766515 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.766829 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.767351 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.767588 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.774631 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.904660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.905880 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:13 crc kubenswrapper[4751]: I0130 21:53:13.906124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041401 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.041716 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.046366 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b928b3-101e-4649-ae57-9857145062f0" path="/var/lib/kubelet/pods/27b928b3-101e-4649-ae57-9857145062f0/volumes" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.047800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.047812 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.065222 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"ssh-known-hosts-edpm-deployment-gtfdq\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.085305 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.696594 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 21:53:14 crc kubenswrapper[4751]: I0130 21:53:14.706057 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gtfdq"] Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.635447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerStarted","Data":"7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0"} Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.636094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerStarted","Data":"2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a"} Jan 30 21:53:15 crc kubenswrapper[4751]: I0130 21:53:15.686932 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" podStartSLOduration=2.065445519 podStartE2EDuration="2.686915456s" podCreationTimestamp="2026-01-30 21:53:13 +0000 UTC" firstStartedPulling="2026-01-30 21:53:14.696314787 +0000 UTC m=+2333.442137436" lastFinishedPulling="2026-01-30 21:53:15.317784724 +0000 UTC m=+2334.063607373" observedRunningTime="2026-01-30 21:53:15.683463972 +0000 UTC m=+2334.429286621" watchObservedRunningTime="2026-01-30 21:53:15.686915456 +0000 UTC m=+2334.432738105" Jan 30 21:53:21 crc kubenswrapper[4751]: I0130 21:53:21.986111 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:21 crc kubenswrapper[4751]: E0130 21:53:21.986833 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:22 crc kubenswrapper[4751]: I0130 21:53:22.705722 4751 generic.go:334] "Generic (PLEG): container finished" podID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerID="7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0" exitCode=0 Jan 30 21:53:22 crc kubenswrapper[4751]: I0130 21:53:22.705794 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerDied","Data":"7b7c6addc2724d8025fc8ecb6877f8749ca44ae396ec2d88ec6aa78682935da0"} Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.231484 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.397784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.398022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.398116 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") pod \"1c9c26ff-407a-4595-8406-e3a0d46450aa\" (UID: \"1c9c26ff-407a-4595-8406-e3a0d46450aa\") " Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.407727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n" (OuterVolumeSpecName: "kube-api-access-p5g7n") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "kube-api-access-p5g7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.474854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.476181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1c9c26ff-407a-4595-8406-e3a0d46450aa" (UID: "1c9c26ff-407a-4595-8406-e3a0d46450aa"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503905 4751 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503973 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c9c26ff-407a-4595-8406-e3a0d46450aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.503991 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5g7n\" (UniqueName: \"kubernetes.io/projected/1c9c26ff-407a-4595-8406-e3a0d46450aa-kube-api-access-p5g7n\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729467 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" event={"ID":"1c9c26ff-407a-4595-8406-e3a0d46450aa","Type":"ContainerDied","Data":"2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a"} Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729512 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8a69c39dae650df7fb0266ab7ac5aeadc94ff22eeb9c5ce3e0af1e9323b23a" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.729582 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gtfdq" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.836454 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:24 crc kubenswrapper[4751]: E0130 21:53:24.837062 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.837084 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.837372 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9c26ff-407a-4595-8406-e3a0d46450aa" containerName="ssh-known-hosts-edpm-deployment" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.838496 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.841140 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.852795 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.853061 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.853953 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:24 crc kubenswrapper[4751]: I0130 21:53:24.879448 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.022789 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.023211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.023292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131522 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.131646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.159107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.171832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.182904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zbttg\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.219095 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:25 crc kubenswrapper[4751]: I0130 21:53:25.778220 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg"] Jan 30 21:53:26 crc kubenswrapper[4751]: I0130 21:53:26.760759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerStarted","Data":"122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0"} Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.311115 4751 scope.go:117] "RemoveContainer" containerID="74af78e3c804e6dbf95f30a1b6c4ba765fc8edb69bfb20dd7f1176259283a952" Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.770566 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerStarted","Data":"0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9"} Jan 30 21:53:27 crc kubenswrapper[4751]: I0130 21:53:27.802142 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" podStartSLOduration=3.109678883 podStartE2EDuration="3.802121256s" podCreationTimestamp="2026-01-30 21:53:24 +0000 UTC" firstStartedPulling="2026-01-30 21:53:25.775311783 +0000 UTC m=+2344.521134432" lastFinishedPulling="2026-01-30 21:53:26.467754146 +0000 UTC m=+2345.213576805" observedRunningTime="2026-01-30 21:53:27.799588967 +0000 UTC m=+2346.545411626" watchObservedRunningTime="2026-01-30 21:53:27.802121256 +0000 UTC m=+2346.547943905" Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.842670 4751 generic.go:334] "Generic (PLEG): container finished" podID="10f27009-b34c-43f0-999f-64c2e2316013" 
containerID="0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9" exitCode=0 Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.842913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerDied","Data":"0aaf8c7f914e1332e2b97e1f8cbfc4564336f0832a85b0e56b76e043fcaf10b9"} Jan 30 21:53:34 crc kubenswrapper[4751]: I0130 21:53:34.975656 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:34 crc kubenswrapper[4751]: E0130 21:53:34.975952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.394671 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518753 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.518909 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") pod \"10f27009-b34c-43f0-999f-64c2e2316013\" (UID: \"10f27009-b34c-43f0-999f-64c2e2316013\") " Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.524892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv" (OuterVolumeSpecName: "kube-api-access-8ttsv") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "kube-api-access-8ttsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.554625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.560186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory" (OuterVolumeSpecName: "inventory") pod "10f27009-b34c-43f0-999f-64c2e2316013" (UID: "10f27009-b34c-43f0-999f-64c2e2316013"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621875 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621923 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10f27009-b34c-43f0-999f-64c2e2316013-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.621937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttsv\" (UniqueName: \"kubernetes.io/projected/10f27009-b34c-43f0-999f-64c2e2316013-kube-api-access-8ttsv\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" event={"ID":"10f27009-b34c-43f0-999f-64c2e2316013","Type":"ContainerDied","Data":"122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0"} Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864716 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122679292cf1494c9eae6f6aa11d5714e9748f977c1cf888cae1244904e992f0" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.864774 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zbttg" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.964171 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:36 crc kubenswrapper[4751]: E0130 21:53:36.965222 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.965250 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.965591 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f27009-b34c-43f0-999f-64c2e2316013" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.966663 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.969779 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970064 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970253 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.970443 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:36 crc kubenswrapper[4751]: I0130 21:53:36.980947 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036593 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.036660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.139830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.146455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.149832 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.160370 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.308768 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:37 crc kubenswrapper[4751]: I0130 21:53:37.912377 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp"] Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.888878 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerStarted","Data":"bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544"} Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.889406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerStarted","Data":"3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a"} Jan 30 21:53:38 crc kubenswrapper[4751]: I0130 21:53:38.915207 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" podStartSLOduration=2.482883207 podStartE2EDuration="2.915183392s" podCreationTimestamp="2026-01-30 21:53:36 +0000 UTC" firstStartedPulling="2026-01-30 21:53:37.921796017 +0000 UTC m=+2356.667618666" lastFinishedPulling="2026-01-30 21:53:38.354096202 +0000 UTC m=+2357.099918851" observedRunningTime="2026-01-30 21:53:38.906610427 +0000 UTC m=+2357.652433086" watchObservedRunningTime="2026-01-30 21:53:38.915183392 +0000 UTC m=+2357.661006041" Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.421433 4751 generic.go:334] "Generic (PLEG): container finished" podID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerID="bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544" exitCode=0 Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.421519 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerDied","Data":"bc7cd9d279a5b97e5c4074451d6efae3f0299f971bed67c8ea07ffc3c0342544"} Jan 30 21:53:47 crc kubenswrapper[4751]: I0130 21:53:47.976297 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:53:47 crc kubenswrapper[4751]: E0130 21:53:47.976700 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:53:48 crc kubenswrapper[4751]: I0130 21:53:48.936975 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127283 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.127451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") pod \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\" (UID: \"0562f716-fdf2-41ff-bb36-5474fa9be5c0\") " Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.133687 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c" (OuterVolumeSpecName: "kube-api-access-2fn2c") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "kube-api-access-2fn2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.159262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.186310 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory" (OuterVolumeSpecName: "inventory") pod "0562f716-fdf2-41ff-bb36-5474fa9be5c0" (UID: "0562f716-fdf2-41ff-bb36-5474fa9be5c0"). InnerVolumeSpecName "inventory". 
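Each time one of these one-shot deployment pods is admitted ("SyncLoop ADD"), the cpu_manager and memory_manager first purge per-container state left behind by pods that no longer exist; that is what the RemoveStaleState and "Deleted CPUSet assignment" entries clustered around the ADDs record, as seen for reboot-os above and install-certs below. A small sketch of that cleanup, again with illustrative types:

    // Drop resource-manager assignments for any pod that is no longer
    // active, before admitting the next pod. Not kubelet's real types.
    package main

    import "fmt"

    type key struct{ podUID, container string }

    type cpuManager struct {
        assignments map[key]string // container -> CPUSet (a string here)
    }

    func (m *cpuManager) removeStaleState(activePods map[string]bool) {
        for k := range m.assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
                delete(m.assignments, k) // deleting during range is safe in Go
            }
        }
    }

    func main() {
        m := &cpuManager{assignments: map[key]string{
            {"0562f716-fdf2-41ff-bb36-5474fa9be5c0", "reboot-os-edpm-deployment-openstack-edpm-ipam"}: "0-3",
        }}
        // The reboot-os pod is gone by the time install-certs is admitted.
        m.removeStaleState(map[string]bool{"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f": true})
    }
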
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232087 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fn2c\" (UniqueName: \"kubernetes.io/projected/0562f716-fdf2-41ff-bb36-5474fa9be5c0-kube-api-access-2fn2c\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.232429 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0562f716-fdf2-41ff-bb36-5474fa9be5c0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" event={"ID":"0562f716-fdf2-41ff-bb36-5474fa9be5c0","Type":"ContainerDied","Data":"3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a"} Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445382 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d5a4553e4e6aa626cca117c84e026f409e5c2220a8c3d21496a40d5224c6c8a" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.445391 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.553368 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:49 crc kubenswrapper[4751]: E0130 21:53:49.553882 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.553901 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.554169 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0562f716-fdf2-41ff-bb36-5474fa9be5c0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.555099 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.560152 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561309 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561243 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561633 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561740 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561796 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561741 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.561939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.574075 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.745902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.745970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746059 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746096 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746384 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746536 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746856 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.746915 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747022 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747197 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.747215 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849670 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.849925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850088 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850156 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.850222 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 
21:53:49.860357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.865191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.866521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.875293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876734 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.876985 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877722 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.877929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.879710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.882879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.883352 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 21:53:49.883875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:49 crc kubenswrapper[4751]: I0130 
21:53:49.884168 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:50 crc kubenswrapper[4751]: I0130 21:53:50.176505 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:53:50 crc kubenswrapper[4751]: I0130 21:53:50.797934 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d"] Jan 30 21:53:51 crc kubenswrapper[4751]: I0130 21:53:51.467226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerStarted","Data":"7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a"} Jan 30 21:53:52 crc kubenswrapper[4751]: I0130 21:53:52.478402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerStarted","Data":"672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc"} Jan 30 21:53:52 crc kubenswrapper[4751]: I0130 21:53:52.506876 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" podStartSLOduration=2.40199318 podStartE2EDuration="3.506857458s" podCreationTimestamp="2026-01-30 21:53:49 +0000 UTC" firstStartedPulling="2026-01-30 21:53:50.814600384 +0000 UTC m=+2369.560423033" lastFinishedPulling="2026-01-30 21:53:51.919464662 +0000 UTC m=+2370.665287311" observedRunningTime="2026-01-30 21:53:52.499017124 +0000 UTC m=+2371.244839793" watchObservedRunningTime="2026-01-30 21:53:52.506857458 +0000 UTC m=+2371.252680107" Jan 30 21:53:59 crc kubenswrapper[4751]: I0130 21:53:59.055812 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:53:59 crc kubenswrapper[4751]: I0130 21:53:59.068095 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-q9ws6"] Jan 30 21:54:00 crc kubenswrapper[4751]: I0130 21:54:00.004896 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee22e47a-e31f-4d01-8eec-e4d24dbb02ca" path="/var/lib/kubelet/pods/ee22e47a-e31f-4d01-8eec-e4d24dbb02ca/volumes" Jan 30 21:54:02 crc kubenswrapper[4751]: I0130 21:54:02.975584 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:02 crc kubenswrapper[4751]: E0130 21:54:02.976438 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:15 crc kubenswrapper[4751]: I0130 21:54:15.980961 4751 scope.go:117] "RemoveContainer" 
containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:15 crc kubenswrapper[4751]: E0130 21:54:15.982198 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:27 crc kubenswrapper[4751]: I0130 21:54:27.395503 4751 scope.go:117] "RemoveContainer" containerID="e68cf53ba13bd45baafd16d7ceca811457154cd522453b22e57f6a2054d3b023" Jan 30 21:54:29 crc kubenswrapper[4751]: I0130 21:54:29.976959 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:29 crc kubenswrapper[4751]: E0130 21:54:29.978414 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:33 crc kubenswrapper[4751]: I0130 21:54:33.948517 4751 generic.go:334] "Generic (PLEG): container finished" podID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerID="672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc" exitCode=0 Jan 30 21:54:33 crc kubenswrapper[4751]: I0130 21:54:33.948608 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerDied","Data":"672587b14b320f08f5e42b1ec7ff59ed42e125c4def3cc11b0df4a7d866c6afc"} Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.491602 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533637 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533739 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533776 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533882 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533918 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") pod 
\"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533945 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.533981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534106 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534166 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.534225 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") pod \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\" (UID: \"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f\") " Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.543230 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544696 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.544715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.545070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.546245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.547915 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.548679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.548707 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.554096 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.554565 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.561148 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.570982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb" (OuterVolumeSpecName: "kube-api-access-lcpwb") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "kube-api-access-lcpwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.578917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.580168 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory" (OuterVolumeSpecName: "inventory") pod "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" (UID: "d9b249ee-25bd-4b25-aaaf-57c3a55dad1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646475 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646516 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646533 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646546 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646565 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646580 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646597 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646610 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcpwb\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-kube-api-access-lcpwb\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646624 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646637 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc 
kubenswrapper[4751]: I0130 21:54:35.646650 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646664 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646677 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646694 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646707 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:35 crc kubenswrapper[4751]: I0130 21:54:35.646719 4751 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b249ee-25bd-4b25-aaaf-57c3a55dad1f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.015136 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.018157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d" event={"ID":"d9b249ee-25bd-4b25-aaaf-57c3a55dad1f","Type":"ContainerDied","Data":"7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a"} Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.018212 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a07c675c0f90c574d8f8f8ea8ef47927ddc4c08de96eca070ff2595e227a85a" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.162609 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:36 crc kubenswrapper[4751]: E0130 21:54:36.163103 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.163120 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.163358 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b249ee-25bd-4b25-aaaf-57c3a55dad1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.165004 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171126 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171390 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171497 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171596 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.171706 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.200177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261503 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261540 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.261590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364795 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.364880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.370313 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.371198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.371645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.372259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.389810 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g7v98\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:36 crc kubenswrapper[4751]: I0130 21:54:36.495059 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:54:37 crc kubenswrapper[4751]: I0130 21:54:37.897753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98"] Jan 30 21:54:37 crc kubenswrapper[4751]: W0130 21:54:37.899469 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43548d7f_01a0_4905_a26d_424ba948cbe8.slice/crio-49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1 WatchSource:0}: Error finding container 49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1: Status 404 returned error can't find the container with id 49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1 Jan 30 21:54:38 crc kubenswrapper[4751]: I0130 21:54:38.043443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerStarted","Data":"49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1"} Jan 30 21:54:39 crc kubenswrapper[4751]: I0130 21:54:39.054271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerStarted","Data":"d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4"} Jan 30 21:54:39 crc kubenswrapper[4751]: I0130 21:54:39.076422 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" podStartSLOduration=2.578880099 podStartE2EDuration="3.076403534s" podCreationTimestamp="2026-01-30 21:54:36 +0000 UTC" firstStartedPulling="2026-01-30 21:54:37.901862076 +0000 UTC m=+2416.647684725" lastFinishedPulling="2026-01-30 21:54:38.399385501 +0000 UTC m=+2417.145208160" observedRunningTime="2026-01-30 21:54:39.073754262 +0000 UTC m=+2417.819576911" watchObservedRunningTime="2026-01-30 21:54:39.076403534 +0000 UTC m=+2417.822226183" Jan 30 21:54:43 crc kubenswrapper[4751]: I0130 21:54:43.976396 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:43 crc kubenswrapper[4751]: E0130 21:54:43.977122 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:54:54 crc kubenswrapper[4751]: I0130 21:54:54.975929 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:54:54 crc kubenswrapper[4751]: E0130 21:54:54.976739 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:05 crc kubenswrapper[4751]: I0130 21:55:05.975670 4751 scope.go:117] 
"RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:05 crc kubenswrapper[4751]: E0130 21:55:05.976567 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:17 crc kubenswrapper[4751]: I0130 21:55:17.978362 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:17 crc kubenswrapper[4751]: E0130 21:55:17.980665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:30 crc kubenswrapper[4751]: I0130 21:55:30.976990 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:30 crc kubenswrapper[4751]: E0130 21:55:30.977918 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:35 crc kubenswrapper[4751]: I0130 21:55:35.703124 4751 generic.go:334] "Generic (PLEG): container finished" podID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerID="d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4" exitCode=0 Jan 30 21:55:35 crc kubenswrapper[4751]: I0130 21:55:35.703214 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerDied","Data":"d63111cd905526ebb8600297ca72fc453aba6ae767a18220cb215df78ce120c4"} Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.293583 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416312 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416398 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.416722 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") pod \"43548d7f-01a0-4905-a26d-424ba948cbe8\" (UID: \"43548d7f-01a0-4905-a26d-424ba948cbe8\") " Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.423984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.424168 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4" (OuterVolumeSpecName: "kube-api-access-vrfn4") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "kube-api-access-vrfn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.453863 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.483346 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.489683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory" (OuterVolumeSpecName: "inventory") pod "43548d7f-01a0-4905-a26d-424ba948cbe8" (UID: "43548d7f-01a0-4905-a26d-424ba948cbe8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520180 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520222 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520236 4751 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/43548d7f-01a0-4905-a26d-424ba948cbe8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520250 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43548d7f-01a0-4905-a26d-424ba948cbe8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.520263 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfn4\" (UniqueName: \"kubernetes.io/projected/43548d7f-01a0-4905-a26d-424ba948cbe8-kube-api-access-vrfn4\") on node \"crc\" DevicePath \"\"" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" event={"ID":"43548d7f-01a0-4905-a26d-424ba948cbe8","Type":"ContainerDied","Data":"49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1"} Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737679 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49af737d44eab3ca3b2a29843774a9cbe165e71fec85b195d68cb91b4cf97ca1" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.737697 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g7v98" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.848308 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:37 crc kubenswrapper[4751]: E0130 21:55:37.849043 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.849078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.849458 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="43548d7f-01a0-4905-a26d-424ba948cbe8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.850418 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.857192 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.857205 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.864503 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.869592 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.869840 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.870013 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.870141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934802 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934831 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.934894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:37 crc kubenswrapper[4751]: I0130 21:55:37.935032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037619 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.037765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.042924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.043862 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.044217 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.044278 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.045721 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.059282 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.181176 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:55:38 crc kubenswrapper[4751]: I0130 21:55:38.743767 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj"] Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.769413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerStarted","Data":"132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e"} Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.770155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerStarted","Data":"e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c"} Jan 30 21:55:39 crc kubenswrapper[4751]: I0130 21:55:39.791402 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" podStartSLOduration=2.267633991 podStartE2EDuration="2.79137947s" podCreationTimestamp="2026-01-30 21:55:37 +0000 UTC" firstStartedPulling="2026-01-30 21:55:38.753475465 +0000 UTC m=+2477.499298114" lastFinishedPulling="2026-01-30 21:55:39.277220934 +0000 UTC m=+2478.023043593" observedRunningTime="2026-01-30 21:55:39.784184364 +0000 UTC m=+2478.530007013" watchObservedRunningTime="2026-01-30 21:55:39.79137947 +0000 UTC m=+2478.537202129" Jan 30 21:55:42 crc kubenswrapper[4751]: I0130 21:55:42.975764 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:42 crc kubenswrapper[4751]: E0130 21:55:42.976796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 21:55:55 crc kubenswrapper[4751]: I0130 21:55:55.976256 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 21:55:56 crc kubenswrapper[4751]: I0130 21:55:56.981723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} Jan 30 21:56:24 crc kubenswrapper[4751]: I0130 21:56:24.278537 4751 generic.go:334] "Generic (PLEG): container finished" podID="9d2edd75-7066-43c1-9636-149a176ee575" 
containerID="132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e" exitCode=0 Jan 30 21:56:24 crc kubenswrapper[4751]: I0130 21:56:24.278635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerDied","Data":"132f59865db02acd7bac90d85d3082a57a9cd620316d618a496f465ceb78253e"} Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.826757 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891445 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891496 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.891532 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") pod \"9d2edd75-7066-43c1-9636-149a176ee575\" (UID: \"9d2edd75-7066-43c1-9636-149a176ee575\") " Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.900986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.903544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5" (OuterVolumeSpecName: "kube-api-access-glmq5") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "kube-api-access-glmq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.929596 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.933469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.936888 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory" (OuterVolumeSpecName: "inventory") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.955934 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d2edd75-7066-43c1-9636-149a176ee575" (UID: "9d2edd75-7066-43c1-9636-149a176ee575"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994481 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994513 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glmq5\" (UniqueName: \"kubernetes.io/projected/9d2edd75-7066-43c1-9636-149a176ee575-kube-api-access-glmq5\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994526 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994538 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994551 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:25 crc kubenswrapper[4751]: I0130 21:56:25.994567 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9d2edd75-7066-43c1-9636-149a176ee575-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" event={"ID":"9d2edd75-7066-43c1-9636-149a176ee575","Type":"ContainerDied","Data":"e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c"} Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301505 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2cb7bf89450e8f13b0ebe7b4c045c23a0242d2fc08f918636cc9c91032b6b5c" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.301133 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.399613 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:26 crc kubenswrapper[4751]: E0130 21:56:26.400166 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.400192 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.400483 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2edd75-7066-43c1-9636-149a176ee575" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.401549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405307 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405444 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.405942 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.406407 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.407877 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.418043 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507478 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507762 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.507944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: 
\"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.508207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.508296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610856 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.610900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.611021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.611070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.619135 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: 
\"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.619873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.621129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.621897 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.641506 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9j62f\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:26 crc kubenswrapper[4751]: I0130 21:56:26.719772 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 21:56:27 crc kubenswrapper[4751]: I0130 21:56:27.293042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f"] Jan 30 21:56:27 crc kubenswrapper[4751]: I0130 21:56:27.316718 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerStarted","Data":"9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2"} Jan 30 21:56:28 crc kubenswrapper[4751]: I0130 21:56:28.327244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerStarted","Data":"e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c"} Jan 30 21:56:28 crc kubenswrapper[4751]: I0130 21:56:28.356893 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" podStartSLOduration=1.933966357 podStartE2EDuration="2.35687434s" podCreationTimestamp="2026-01-30 21:56:26 +0000 UTC" firstStartedPulling="2026-01-30 21:56:27.305069974 +0000 UTC m=+2526.050892623" lastFinishedPulling="2026-01-30 21:56:27.727977947 +0000 UTC m=+2526.473800606" observedRunningTime="2026-01-30 21:56:28.348221134 +0000 UTC m=+2527.094043783" watchObservedRunningTime="2026-01-30 21:56:28.35687434 +0000 UTC m=+2527.102696989" Jan 30 21:58:24 crc kubenswrapper[4751]: I0130 21:58:24.126903 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:24 crc kubenswrapper[4751]: I0130 21:58:24.127526 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:58:54 crc kubenswrapper[4751]: I0130 21:58:54.126548 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:58:54 crc kubenswrapper[4751]: I0130 21:58:54.127050 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.126520 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.127128 4751 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.127175 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.128259 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.128333 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" gracePeriod=600 Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.695714 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" exitCode=0 Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.695774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19"} Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.696403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} Jan 30 21:59:24 crc kubenswrapper[4751]: I0130 21:59:24.696432 4751 scope.go:117] "RemoveContainer" containerID="5cf46a77924b2097b8861473da86681318a4cb57006ae6434390e158d58ea5c9" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.145072 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.148672 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.153840 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.154041 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.170121 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300174 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.300794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.403752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.404215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.404537 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.405139 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod 
\"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.411671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.420456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"collect-profiles-29496840-cv6z6\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.516193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:00 crc kubenswrapper[4751]: I0130 22:00:00.989376 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:00:01 crc kubenswrapper[4751]: I0130 22:00:01.103719 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerStarted","Data":"cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569"} Jan 30 22:00:02 crc kubenswrapper[4751]: I0130 22:00:02.116348 4751 generic.go:334] "Generic (PLEG): container finished" podID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerID="8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb" exitCode=0 Jan 30 22:00:02 crc kubenswrapper[4751]: I0130 22:00:02.116426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerDied","Data":"8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb"} Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.582555 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.585534 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.595952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.596109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.596185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.609488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.637016 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.700810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.701781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.701890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod 
\"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.728923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"certified-operators-mbgjr\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803666 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.803687 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") pod \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\" (UID: \"87fd180d-a717-4e4f-92fc-e8e77f2d303c\") " Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.805186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume" (OuterVolumeSpecName: "config-volume") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.808432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2" (OuterVolumeSpecName: "kube-api-access-mpqc2") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "kube-api-access-mpqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.812700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87fd180d-a717-4e4f-92fc-e8e77f2d303c" (UID: "87fd180d-a717-4e4f-92fc-e8e77f2d303c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907451 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fd180d-a717-4e4f-92fc-e8e77f2d303c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907493 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fd180d-a717-4e4f-92fc-e8e77f2d303c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.907509 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpqc2\" (UniqueName: \"kubernetes.io/projected/87fd180d-a717-4e4f-92fc-e8e77f2d303c-kube-api-access-mpqc2\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:03 crc kubenswrapper[4751]: I0130 22:00:03.948388 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" event={"ID":"87fd180d-a717-4e4f-92fc-e8e77f2d303c","Type":"ContainerDied","Data":"cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569"} Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160492 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd852f618c14d77a9840e0cfcbd5ee8f8500a03a169a9c644de32c3791b5d569" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.160551 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6" Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.526774 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.718587 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 22:00:04 crc kubenswrapper[4751]: I0130 22:00:04.730584 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496795-lg25p"] Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173585 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e" exitCode=0 Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e"} Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.173911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1"} Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.176446 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:00:05 crc kubenswrapper[4751]: I0130 22:00:05.992698 4751 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="cc9ed63a-23a2-4b50-a290-0409ff14fd95" path="/var/lib/kubelet/pods/cc9ed63a-23a2-4b50-a290-0409ff14fd95/volumes" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.012644 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:08 crc kubenswrapper[4751]: E0130 22:00:08.013493 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.013507 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.013799 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" containerName="collect-profiles" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.015744 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.031339 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.136073 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.136765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.137195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.240495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.241107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.241119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.267446 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"redhat-operators-h4rng\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.349344 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:08 crc kubenswrapper[4751]: I0130 22:00:08.889556 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217422 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" exitCode=0 Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454"} Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.217965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"ea88b11c8501c2661a48b1f2780fa528f995da90eb3c23827d58fc3d93966644"} Jan 30 22:00:09 crc kubenswrapper[4751]: I0130 22:00:09.221587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c"} Jan 30 22:00:11 crc kubenswrapper[4751]: I0130 22:00:11.244622 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"} Jan 30 22:00:13 crc kubenswrapper[4751]: I0130 22:00:13.269834 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c" exitCode=0 Jan 30 22:00:13 crc kubenswrapper[4751]: 
I0130 22:00:13.269899 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c"} Jan 30 22:00:15 crc kubenswrapper[4751]: I0130 22:00:15.307538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerStarted","Data":"ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea"} Jan 30 22:00:15 crc kubenswrapper[4751]: I0130 22:00:15.330808 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mbgjr" podStartSLOduration=3.139745247 podStartE2EDuration="12.330788062s" podCreationTimestamp="2026-01-30 22:00:03 +0000 UTC" firstStartedPulling="2026-01-30 22:00:05.176217045 +0000 UTC m=+2743.922039694" lastFinishedPulling="2026-01-30 22:00:14.36725986 +0000 UTC m=+2753.113082509" observedRunningTime="2026-01-30 22:00:15.324732426 +0000 UTC m=+2754.070555085" watchObservedRunningTime="2026-01-30 22:00:15.330788062 +0000 UTC m=+2754.076610711" Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.385859 4751 generic.go:334] "Generic (PLEG): container finished" podID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerID="e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c" exitCode=0 Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.385935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerDied","Data":"e55a222a8cf3285625e24882cb0407f200684e42ff8e342e2e19480733bf455c"} Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.949527 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:23 crc kubenswrapper[4751]: I0130 22:00:23.949781 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:24 crc kubenswrapper[4751]: I0130 22:00:24.901219 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001489 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.001769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") pod \"64c0e484-536b-4bf5-9f35-2bfc04b14133\" (UID: \"64c0e484-536b-4bf5-9f35-2bfc04b14133\") " Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.006963 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:25 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:25 crc kubenswrapper[4751]: > Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.009858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.015782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7" (OuterVolumeSpecName: "kube-api-access-hqtd7") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "kube-api-access-hqtd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.036993 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.041285 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.043795 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory" (OuterVolumeSpecName: "inventory") pod "64c0e484-536b-4bf5-9f35-2bfc04b14133" (UID: "64c0e484-536b-4bf5-9f35-2bfc04b14133"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105260 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105297 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqtd7\" (UniqueName: \"kubernetes.io/projected/64c0e484-536b-4bf5-9f35-2bfc04b14133-kube-api-access-hqtd7\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105367 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105382 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.105393 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0e484-536b-4bf5-9f35-2bfc04b14133-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410011 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" event={"ID":"64c0e484-536b-4bf5-9f35-2bfc04b14133","Type":"ContainerDied","Data":"9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2"} Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410432 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a026eb729a34d4b477952f2632ed15c11d6b22037f08e83aabf1428e03de6e2" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.410061 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9j62f" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.545318 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"] Jan 30 22:00:25 crc kubenswrapper[4751]: E0130 22:00:25.549997 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.550021 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.550277 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c0e484-536b-4bf5-9f35-2bfc04b14133" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.561144 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.567815 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.567923 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568026 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568052 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568112 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.568202 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.573651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"] Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.575752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.722969 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.723191 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825574 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825835 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825890 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.825999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826071 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.826172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.827036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.831549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.831809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.833855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.835945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.836000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.836243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.837136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: 
I0130 22:00:25.850566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gjscv\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:25 crc kubenswrapper[4751]: I0130 22:00:25.886829 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:00:26 crc kubenswrapper[4751]: W0130 22:00:26.482541 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7165caae_e471_463b_9f66_be7fb4c7c463.slice/crio-c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302 WatchSource:0}: Error finding container c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302: Status 404 returned error can't find the container with id c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302 Jan 30 22:00:26 crc kubenswrapper[4751]: I0130 22:00:26.483364 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv"] Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.434112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerStarted","Data":"c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302"} Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.437817 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" exitCode=0 Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.437883 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"} Jan 30 22:00:27 crc kubenswrapper[4751]: I0130 22:00:27.568065 4751 scope.go:117] "RemoveContainer" containerID="e542d53fa8d38b44c5415e62c079644bf8fb944ad32fd45452254fbadf2caa51" Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.449502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerStarted","Data":"4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37"} Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.452455 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerStarted","Data":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"} Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.463938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" podStartSLOduration=2.147722824 podStartE2EDuration="3.463917619s" podCreationTimestamp="2026-01-30 22:00:25 +0000 UTC" firstStartedPulling="2026-01-30 22:00:26.484626175 +0000 UTC m=+2765.230448824" lastFinishedPulling="2026-01-30 22:00:27.80082097 +0000 UTC m=+2766.546643619" 
observedRunningTime="2026-01-30 22:00:28.463167878 +0000 UTC m=+2767.208990547" watchObservedRunningTime="2026-01-30 22:00:28.463917619 +0000 UTC m=+2767.209740258" Jan 30 22:00:28 crc kubenswrapper[4751]: I0130 22:00:28.484590 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h4rng" podStartSLOduration=2.7103253 podStartE2EDuration="21.484570964s" podCreationTimestamp="2026-01-30 22:00:07 +0000 UTC" firstStartedPulling="2026-01-30 22:00:09.219365227 +0000 UTC m=+2747.965187876" lastFinishedPulling="2026-01-30 22:00:27.993610891 +0000 UTC m=+2766.739433540" observedRunningTime="2026-01-30 22:00:28.479115914 +0000 UTC m=+2767.224938563" watchObservedRunningTime="2026-01-30 22:00:28.484570964 +0000 UTC m=+2767.230393613" Jan 30 22:00:35 crc kubenswrapper[4751]: I0130 22:00:35.009997 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:35 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:35 crc kubenswrapper[4751]: > Jan 30 22:00:38 crc kubenswrapper[4751]: I0130 22:00:38.350950 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:38 crc kubenswrapper[4751]: I0130 22:00:38.351579 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:00:39 crc kubenswrapper[4751]: I0130 22:00:39.397208 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:39 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:39 crc kubenswrapper[4751]: > Jan 30 22:00:43 crc kubenswrapper[4751]: I0130 22:00:43.996219 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:44 crc kubenswrapper[4751]: I0130 22:00:44.051369 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:44 crc kubenswrapper[4751]: I0130 22:00:44.237185 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:45 crc kubenswrapper[4751]: I0130 22:00:45.763387 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mbgjr" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" containerID="cri-o://ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea" gracePeriod=2 Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780000 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerID="ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea" exitCode=0 Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea"} Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 
22:00:46.780522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbgjr" event={"ID":"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017","Type":"ContainerDied","Data":"2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1"} Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.780540 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2295f2e02e7ab4a280ce1245b46978ccac929c130b2e9ec0a55b1dbdfa326de1" Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.879458 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.941890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") pod \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\" (UID: \"ddeaf5f6-2b13-47e2-a99e-9f23d7a84017\") " Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.942868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities" (OuterVolumeSpecName: "utilities") pod "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" (UID: "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.943061 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:46 crc kubenswrapper[4751]: I0130 22:00:46.947810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt" (OuterVolumeSpecName: "kube-api-access-8z7nt") pod "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" (UID: "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017"). InnerVolumeSpecName "kube-api-access-8z7nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.003449 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" (UID: "ddeaf5f6-2b13-47e2-a99e-9f23d7a84017"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.045486 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7nt\" (UniqueName: \"kubernetes.io/projected/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-kube-api-access-8z7nt\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.045524 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.796097 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbgjr" Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.835903 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.846227 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mbgjr"] Jan 30 22:00:47 crc kubenswrapper[4751]: I0130 22:00:47.989571 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" path="/var/lib/kubelet/pods/ddeaf5f6-2b13-47e2-a99e-9f23d7a84017/volumes" Jan 30 22:00:49 crc kubenswrapper[4751]: I0130 22:00:49.396185 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:49 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:49 crc kubenswrapper[4751]: > Jan 30 22:00:59 crc kubenswrapper[4751]: I0130 22:00:59.398262 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:00:59 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:00:59 crc kubenswrapper[4751]: > Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.173454 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174035 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-content" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174049 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-content" Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174067 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-utilities" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174073 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="extract-utilities" Jan 30 22:01:00 crc kubenswrapper[4751]: E0130 22:01:00.174089 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174094 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.174388 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddeaf5f6-2b13-47e2-a99e-9f23d7a84017" containerName="registry-server" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.175214 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.211839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.274829 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.274930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.275114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.275151 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.376951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.377043 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.393411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"keystone-cron-29496841-qnsrj\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:00 crc kubenswrapper[4751]: I0130 22:01:00.510103 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.090365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496841-qnsrj"] Jan 30 22:01:01 crc kubenswrapper[4751]: W0130 22:01:01.098110 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec292c3e_470e_4f61_92e9_4e2c8098f879.slice/crio-a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0 WatchSource:0}: Error finding container a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0: Status 404 returned error can't find the container with id a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0 Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.958059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerStarted","Data":"b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31"} Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.958471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerStarted","Data":"a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0"} Jan 30 22:01:01 crc kubenswrapper[4751]: I0130 22:01:01.976724 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496841-qnsrj" podStartSLOduration=1.976706209 podStartE2EDuration="1.976706209s" podCreationTimestamp="2026-01-30 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:01:01.973994156 +0000 UTC m=+2800.719816805" watchObservedRunningTime="2026-01-30 22:01:01.976706209 +0000 UTC m=+2800.722528858" Jan 30 22:01:05 crc kubenswrapper[4751]: I0130 22:01:05.054025 4751 generic.go:334] "Generic (PLEG): container finished" podID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerID="b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31" exitCode=0 Jan 30 22:01:05 crc kubenswrapper[4751]: I0130 22:01:05.054119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerDied","Data":"b88daa692ab905bb5eba548f715cb26848d06e66af1868ac78813ab3b7cb8c31"} Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.478391 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.647616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.647796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.648008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.648140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") pod \"ec292c3e-470e-4f61-92e9-4e2c8098f879\" (UID: \"ec292c3e-470e-4f61-92e9-4e2c8098f879\") " Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.654650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9" (OuterVolumeSpecName: "kube-api-access-nmsl9") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "kube-api-access-nmsl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.654793 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.688268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.727221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data" (OuterVolumeSpecName: "config-data") pod "ec292c3e-470e-4f61-92e9-4e2c8098f879" (UID: "ec292c3e-470e-4f61-92e9-4e2c8098f879"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751609 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751651 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751668 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmsl9\" (UniqueName: \"kubernetes.io/projected/ec292c3e-470e-4f61-92e9-4e2c8098f879-kube-api-access-nmsl9\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:06 crc kubenswrapper[4751]: I0130 22:01:06.751682 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec292c3e-470e-4f61-92e9-4e2c8098f879-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496841-qnsrj" event={"ID":"ec292c3e-470e-4f61-92e9-4e2c8098f879","Type":"ContainerDied","Data":"a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0"} Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076990 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a41e99fabcd417d94a957674e0615513fa1a98fd6536bf33a451c8f9a839fdb0" Jan 30 22:01:07 crc kubenswrapper[4751]: I0130 22:01:07.076821 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496841-qnsrj" Jan 30 22:01:08 crc kubenswrapper[4751]: I0130 22:01:08.410172 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:08 crc kubenswrapper[4751]: I0130 22:01:08.468125 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:09 crc kubenswrapper[4751]: I0130 22:01:09.162848 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.101405 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h4rng" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" containerID="cri-o://a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" gracePeriod=2 Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.796072 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.964697 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") pod \"6f296323-78aa-4bcb-8418-898e0d7b775e\" (UID: \"6f296323-78aa-4bcb-8418-898e0d7b775e\") " Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.965540 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities" (OuterVolumeSpecName: "utilities") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.966045 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:10 crc kubenswrapper[4751]: I0130 22:01:10.971618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq" (OuterVolumeSpecName: "kube-api-access-2kfwq") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "kube-api-access-2kfwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.074685 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfwq\" (UniqueName: \"kubernetes.io/projected/6f296323-78aa-4bcb-8418-898e0d7b775e-kube-api-access-2kfwq\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.092857 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f296323-78aa-4bcb-8418-898e0d7b775e" (UID: "6f296323-78aa-4bcb-8418-898e0d7b775e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112754 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" exitCode=0 Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"} Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4rng" event={"ID":"6f296323-78aa-4bcb-8418-898e0d7b775e","Type":"ContainerDied","Data":"ea88b11c8501c2661a48b1f2780fa528f995da90eb3c23827d58fc3d93966644"} Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112839 4751 scope.go:117] "RemoveContainer" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.112960 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4rng" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.144828 4751 scope.go:117] "RemoveContainer" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.152518 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.162976 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h4rng"] Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.177839 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f296323-78aa-4bcb-8418-898e0d7b775e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.180698 4751 scope.go:117] "RemoveContainer" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.226233 4751 scope.go:117] "RemoveContainer" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.226714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": container with ID starting with a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343 not found: ID does not exist" containerID="a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.226748 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343"} err="failed to get container status \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": rpc error: code = NotFound desc = could not find container \"a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343\": container with ID starting with a08661deaa76429daed052d597f575ca986586b19c46b93459c2974d26d44343 not found: ID does not exist" Jan 30 22:01:11 crc 
kubenswrapper[4751]: I0130 22:01:11.226770 4751 scope.go:117] "RemoveContainer" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.227061 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": container with ID starting with bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923 not found: ID does not exist" containerID="bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227186 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923"} err="failed to get container status \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": rpc error: code = NotFound desc = could not find container \"bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923\": container with ID starting with bf63f76e3da4f87b298f0312635796b3cf88cbcb082c7fb3553ac3bedb997923 not found: ID does not exist" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227296 4751 scope.go:117] "RemoveContainer" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: E0130 22:01:11.227664 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": container with ID starting with 3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454 not found: ID does not exist" containerID="3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.227696 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454"} err="failed to get container status \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": rpc error: code = NotFound desc = could not find container \"3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454\": container with ID starting with 3c8497c3258c93d66fb519273460d98690093b33a493d7378fb7c4a7c131f454 not found: ID does not exist" Jan 30 22:01:11 crc kubenswrapper[4751]: I0130 22:01:11.996127 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" path="/var/lib/kubelet/pods/6f296323-78aa-4bcb-8418-898e0d7b775e/volumes" Jan 30 22:01:24 crc kubenswrapper[4751]: I0130 22:01:24.126592 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:24 crc kubenswrapper[4751]: I0130 22:01:24.127182 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:01:54 crc kubenswrapper[4751]: I0130 22:01:54.126791 4751 patch_prober.go:28] interesting 
pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:01:54 crc kubenswrapper[4751]: I0130 22:01:54.127440 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.337412 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339053 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-content" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-content" Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339108 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339117 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339149 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-utilities" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339158 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="extract-utilities" Jan 30 22:02:06 crc kubenswrapper[4751]: E0130 22:02:06.339176 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339184 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339457 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec292c3e-470e-4f61-92e9-4e2c8098f879" containerName="keystone-cron" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.339495 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f296323-78aa-4bcb-8418-898e0d7b775e" containerName="registry-server" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.341686 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.350099 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.489514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.489986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.490202 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.592656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.593000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.593036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.617550 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"community-operators-8c8mn\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:06 crc kubenswrapper[4751]: I0130 22:02:06.666632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.261687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.985476 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" exitCode=0 Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.988459 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13"} Jan 30 22:02:07 crc kubenswrapper[4751]: I0130 22:02:07.988500 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"1e64c64ad003c23a57c893ae9b9bbce3beefa15f8e5c6d4a0e289ffc571f4bf7"} Jan 30 22:02:09 crc kubenswrapper[4751]: I0130 22:02:09.015044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"} Jan 30 22:02:12 crc kubenswrapper[4751]: I0130 22:02:12.043825 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" exitCode=0 Jan 30 22:02:12 crc kubenswrapper[4751]: I0130 22:02:12.043891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"} Jan 30 22:02:13 crc kubenswrapper[4751]: I0130 22:02:13.059050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerStarted","Data":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"} Jan 30 22:02:13 crc kubenswrapper[4751]: I0130 22:02:13.097671 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8c8mn" podStartSLOduration=2.639238981 podStartE2EDuration="7.097648883s" podCreationTimestamp="2026-01-30 22:02:06 +0000 UTC" firstStartedPulling="2026-01-30 22:02:07.987560734 +0000 UTC m=+2866.733383383" lastFinishedPulling="2026-01-30 22:02:12.445970636 +0000 UTC m=+2871.191793285" observedRunningTime="2026-01-30 22:02:13.091648259 +0000 UTC m=+2871.837470908" watchObservedRunningTime="2026-01-30 22:02:13.097648883 +0000 UTC m=+2871.843471532" Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.667944 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.668712 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:16 crc kubenswrapper[4751]: I0130 22:02:16.739723 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:17 crc kubenswrapper[4751]: I0130 22:02:17.205914 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:17 crc kubenswrapper[4751]: I0130 22:02:17.927496 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.121376 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8c8mn" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" containerID="cri-o://d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" gracePeriod=2 Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.681299 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806669 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.806950 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") pod \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\" (UID: \"c9fecd57-6bc8-4fc8-b188-e885cefc9f84\") " Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.807805 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities" (OuterVolumeSpecName: "utilities") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.811828 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn" (OuterVolumeSpecName: "kube-api-access-b5bdn") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "kube-api-access-b5bdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.862791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9fecd57-6bc8-4fc8-b188-e885cefc9f84" (UID: "c9fecd57-6bc8-4fc8-b188-e885cefc9f84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913570 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913607 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5bdn\" (UniqueName: \"kubernetes.io/projected/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-kube-api-access-b5bdn\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:19 crc kubenswrapper[4751]: I0130 22:02:19.913618 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9fecd57-6bc8-4fc8-b188-e885cefc9f84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162664 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" exitCode=0 Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"} Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8c8mn" event={"ID":"c9fecd57-6bc8-4fc8-b188-e885cefc9f84","Type":"ContainerDied","Data":"1e64c64ad003c23a57c893ae9b9bbce3beefa15f8e5c6d4a0e289ffc571f4bf7"} Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162759 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8c8mn" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.162842 4751 scope.go:117] "RemoveContainer" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.205605 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.208527 4751 scope.go:117] "RemoveContainer" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.219049 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8c8mn"] Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.251545 4751 scope.go:117] "RemoveContainer" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.319518 4751 scope.go:117] "RemoveContainer" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325034 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": container with ID starting with d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216 not found: ID does not exist" containerID="d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325074 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216"} err="failed to get container status \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": rpc error: code = NotFound desc = could not find container \"d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216\": container with ID starting with d2e033294b8f105a12cb9c0c33a5f4368f4c160277191768b5f279863c6d7216 not found: ID does not exist" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325100 4751 scope.go:117] "RemoveContainer" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325503 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": container with ID starting with 4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc not found: ID does not exist" containerID="4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325527 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc"} err="failed to get container status \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": rpc error: code = NotFound desc = could not find container \"4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc\": container with ID starting with 4da657573b29eb65b359b69a05969a511c97f8de01b2ddfe81969088430838dc not found: ID does not exist" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325541 4751 scope.go:117] "RemoveContainer" 
containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: E0130 22:02:20.325766 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": container with ID starting with ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13 not found: ID does not exist" containerID="ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13" Jan 30 22:02:20 crc kubenswrapper[4751]: I0130 22:02:20.325784 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13"} err="failed to get container status \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": rpc error: code = NotFound desc = could not find container \"ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13\": container with ID starting with ccb9407cf44f42b5a1446162192238689754a199e7871ca9d455ff5f30635f13 not found: ID does not exist" Jan 30 22:02:21 crc kubenswrapper[4751]: I0130 22:02:21.996412 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" path="/var/lib/kubelet/pods/c9fecd57-6bc8-4fc8-b188-e885cefc9f84/volumes" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127063 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127672 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.127722 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.128828 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:02:24 crc kubenswrapper[4751]: I0130 22:02:24.128895 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" gracePeriod=600 Jan 30 22:02:24 crc kubenswrapper[4751]: E0130 22:02:24.266618 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.237952 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" exitCode=0 Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.238043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"} Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.238267 4751 scope.go:117] "RemoveContainer" containerID="72b97ad134c710248fa542e7f2b4ac03f85859885b9de5fb88903a4ed9925d19" Jan 30 22:02:25 crc kubenswrapper[4751]: I0130 22:02:25.239018 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:25 crc kubenswrapper[4751]: E0130 22:02:25.239291 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:29 crc kubenswrapper[4751]: I0130 22:02:29.294413 4751 generic.go:334] "Generic (PLEG): container finished" podID="7165caae-e471-463b-9f66-be7fb4c7c463" containerID="4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37" exitCode=0 Jan 30 22:02:29 crc kubenswrapper[4751]: I0130 22:02:29.294515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerDied","Data":"4286082d44313eb3c72a73df8ca40afdcf8a7a69e10559ba7896eeef94d61e37"} Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.870822 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993420 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993452 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993480 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993660 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:30 crc kubenswrapper[4751]: I0130 22:02:30.993844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") pod \"7165caae-e471-463b-9f66-be7fb4c7c463\" (UID: \"7165caae-e471-463b-9f66-be7fb4c7c463\") " Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:30.999952 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:30.999982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt" (OuterVolumeSpecName: "kube-api-access-cbpmt") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "kube-api-access-cbpmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.026825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.039199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047656 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047696 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory" (OuterVolumeSpecName: "inventory") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.047770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.049473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7165caae-e471-463b-9f66-be7fb4c7c463" (UID: "7165caae-e471-463b-9f66-be7fb4c7c463"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098238 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098268 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbpmt\" (UniqueName: \"kubernetes.io/projected/7165caae-e471-463b-9f66-be7fb4c7c463-kube-api-access-cbpmt\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098279 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098290 4751 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7165caae-e471-463b-9f66-be7fb4c7c463-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098300 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098313 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098321 4751 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098349 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.098359 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7165caae-e471-463b-9f66-be7fb4c7c463-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" event={"ID":"7165caae-e471-463b-9f66-be7fb4c7c463","Type":"ContainerDied","Data":"c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302"} Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315731 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c6452be326750e2048619c44dd9a07b67cfa00b1469a8de0a0fe91a76b4bd302" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.315530 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gjscv" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428264 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428725 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428744 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428759 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-content" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428767 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-content" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428790 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-utilities" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428796 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="extract-utilities" Jan 30 22:02:31 crc kubenswrapper[4751]: E0130 22:02:31.428815 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.428821 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.429034 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7165caae-e471-463b-9f66-be7fb4c7c463" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.429054 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fecd57-6bc8-4fc8-b188-e885cefc9f84" containerName="registry-server" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.430320 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441912 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441991 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.441922 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.442239 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.442419 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.452788 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.508986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.509139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc 
kubenswrapper[4751]: I0130 22:02:31.509294 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.509541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.611983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612083 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612200 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.612293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.616771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.617035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.617807 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.619948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.639010 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qg6\" (UniqueName: 
\"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:31 crc kubenswrapper[4751]: I0130 22:02:31.757507 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:02:32 crc kubenswrapper[4751]: I0130 22:02:32.445155 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx"] Jan 30 22:02:33 crc kubenswrapper[4751]: I0130 22:02:33.339538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerStarted","Data":"bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974"} Jan 30 22:02:34 crc kubenswrapper[4751]: I0130 22:02:34.351843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerStarted","Data":"e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9"} Jan 30 22:02:34 crc kubenswrapper[4751]: I0130 22:02:34.380745 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" podStartSLOduration=2.56553063 podStartE2EDuration="3.380726408s" podCreationTimestamp="2026-01-30 22:02:31 +0000 UTC" firstStartedPulling="2026-01-30 22:02:32.441273162 +0000 UTC m=+2891.187095821" lastFinishedPulling="2026-01-30 22:02:33.25646895 +0000 UTC m=+2892.002291599" observedRunningTime="2026-01-30 22:02:34.376265096 +0000 UTC m=+2893.122087745" watchObservedRunningTime="2026-01-30 22:02:34.380726408 +0000 UTC m=+2893.126549047" Jan 30 22:02:40 crc kubenswrapper[4751]: I0130 22:02:40.976607 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:40 crc kubenswrapper[4751]: E0130 22:02:40.977675 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:02:55 crc kubenswrapper[4751]: I0130 22:02:55.976685 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:02:55 crc kubenswrapper[4751]: E0130 22:02:55.977646 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:10 crc kubenswrapper[4751]: I0130 22:03:10.976103 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:10 crc kubenswrapper[4751]: E0130 
22:03:10.976860 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:24 crc kubenswrapper[4751]: I0130 22:03:24.975837 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:24 crc kubenswrapper[4751]: E0130 22:03:24.976660 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.704475 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.709090 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.721006 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.840210 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.840653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.841024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943764 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.943834 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.944379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.944553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:34 crc kubenswrapper[4751]: I0130 22:03:34.967390 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"redhat-marketplace-6htrn\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.038358 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.612751 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996511 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" exitCode=0 Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d"} Jan 30 22:03:35 crc kubenswrapper[4751]: I0130 22:03:35.996605 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"4a229657d088af7f49d719c2a6ef1d4ee422502663ee61b387d5d7457dd49ff6"} Jan 30 22:03:37 crc kubenswrapper[4751]: I0130 22:03:37.976744 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:37 crc kubenswrapper[4751]: E0130 22:03:37.977736 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:38 crc kubenswrapper[4751]: I0130 22:03:38.018601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} Jan 30 22:03:39 crc kubenswrapper[4751]: I0130 22:03:39.032146 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" exitCode=0 Jan 30 22:03:39 crc kubenswrapper[4751]: I0130 22:03:39.032535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} Jan 30 22:03:40 crc kubenswrapper[4751]: I0130 22:03:40.045614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerStarted","Data":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} Jan 30 22:03:40 crc kubenswrapper[4751]: I0130 22:03:40.077833 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6htrn" podStartSLOduration=2.344143574 podStartE2EDuration="6.077811602s" podCreationTimestamp="2026-01-30 22:03:34 +0000 UTC" firstStartedPulling="2026-01-30 22:03:35.998476244 +0000 UTC m=+2954.744298903" lastFinishedPulling="2026-01-30 22:03:39.732144272 +0000 UTC m=+2958.477966931" observedRunningTime="2026-01-30 
22:03:40.075317184 +0000 UTC m=+2958.821139833" watchObservedRunningTime="2026-01-30 22:03:40.077811602 +0000 UTC m=+2958.823634251" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.039380 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.039980 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.103759 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.168773 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:45 crc kubenswrapper[4751]: I0130 22:03:45.344688 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.129307 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6htrn" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" containerID="cri-o://21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" gracePeriod=2 Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.656293 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714683 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.714952 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") pod \"309930f6-c8a8-487c-b74e-d2010aedd851\" (UID: \"309930f6-c8a8-487c-b74e-d2010aedd851\") " Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.715582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities" (OuterVolumeSpecName: "utilities") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.720075 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj" (OuterVolumeSpecName: "kube-api-access-sv6cj") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "kube-api-access-sv6cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.739892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "309930f6-c8a8-487c-b74e-d2010aedd851" (UID: "309930f6-c8a8-487c-b74e-d2010aedd851"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818026 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818061 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6cj\" (UniqueName: \"kubernetes.io/projected/309930f6-c8a8-487c-b74e-d2010aedd851-kube-api-access-sv6cj\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:47 crc kubenswrapper[4751]: I0130 22:03:47.818071 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/309930f6-c8a8-487c-b74e-d2010aedd851-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142558 4751 generic.go:334] "Generic (PLEG): container finished" podID="309930f6-c8a8-487c-b74e-d2010aedd851" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" exitCode=0 Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142608 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6htrn" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.142656 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.143011 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6htrn" event={"ID":"309930f6-c8a8-487c-b74e-d2010aedd851","Type":"ContainerDied","Data":"4a229657d088af7f49d719c2a6ef1d4ee422502663ee61b387d5d7457dd49ff6"} Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.143042 4751 scope.go:117] "RemoveContainer" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.189084 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.189254 4751 scope.go:117] "RemoveContainer" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.205451 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6htrn"] Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.215755 4751 scope.go:117] "RemoveContainer" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.288448 4751 scope.go:117] "RemoveContainer" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.289588 4751 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": container with ID starting with 21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317 not found: ID does not exist" containerID="21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.289620 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317"} err="failed to get container status \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": rpc error: code = NotFound desc = could not find container \"21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317\": container with ID starting with 21a2f8fd1cd2e2d5667a4a2ae04fbcc78cd97684cee177cd8db4d7ba720b4317 not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.289642 4751 scope.go:117] "RemoveContainer" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.289971 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": container with ID starting with d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53 not found: ID does not exist" containerID="d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290023 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53"} err="failed to get container status \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": rpc error: code = NotFound desc = could not find container \"d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53\": container with ID starting with d6313efb473c749da2977acae53eaefcca7fa3fb2484d1ce778ae6d294160a53 not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290057 4751 scope.go:117] "RemoveContainer" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.290819 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": container with ID starting with 3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d not found: ID does not exist" containerID="3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.290882 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d"} err="failed to get container status \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": rpc error: code = NotFound desc = could not find container \"3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d\": container with ID starting with 3bb2f30e9f5e457827fa1b8e8ef9a30b3f8039a316885c2e4f8cec4ff209340d not found: ID does not exist" Jan 30 22:03:48 crc kubenswrapper[4751]: I0130 22:03:48.975461 4751 scope.go:117] "RemoveContainer" 
containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:03:48 crc kubenswrapper[4751]: E0130 22:03:48.976003 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:03:49 crc kubenswrapper[4751]: I0130 22:03:49.994022 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" path="/var/lib/kubelet/pods/309930f6-c8a8-487c-b74e-d2010aedd851/volumes" Jan 30 22:04:02 crc kubenswrapper[4751]: I0130 22:04:02.977058 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:02 crc kubenswrapper[4751]: E0130 22:04:02.977865 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:14 crc kubenswrapper[4751]: I0130 22:04:14.976714 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:14 crc kubenswrapper[4751]: E0130 22:04:14.977945 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:26 crc kubenswrapper[4751]: I0130 22:04:26.975714 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:26 crc kubenswrapper[4751]: E0130 22:04:26.976640 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:37 crc kubenswrapper[4751]: I0130 22:04:37.976930 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:37 crc kubenswrapper[4751]: E0130 22:04:37.977839 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:48 crc kubenswrapper[4751]: 
E0130 22:04:48.256548 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-conmon-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:48 crc kubenswrapper[4751]: E0130 22:04:48.256688 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c2956e_910c_4604_a9ba_86289f854a59.slice/crio-e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 22:04:48 crc kubenswrapper[4751]: I0130 22:04:48.791290 4751 generic.go:334] "Generic (PLEG): container finished" podID="93c2956e-910c-4604-a9ba-86289f854a59" containerID="e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9" exitCode=0 Jan 30 22:04:48 crc kubenswrapper[4751]: I0130 22:04:48.791377 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerDied","Data":"e561a4f54ddbb4f307aca022b6eee2605073ff32bfb5b070e34d1d12cbb217a9"} Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.322638 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.482963 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483274 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483448 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.483514 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") pod \"93c2956e-910c-4604-a9ba-86289f854a59\" (UID: \"93c2956e-910c-4604-a9ba-86289f854a59\") " Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.490692 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6" (OuterVolumeSpecName: "kube-api-access-m7qg6") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "kube-api-access-m7qg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.497544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.519090 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.521746 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.522699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory" (OuterVolumeSpecName: "inventory") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.526217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.530119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "93c2956e-910c-4604-a9ba-86289f854a59" (UID: "93c2956e-910c-4604-a9ba-86289f854a59"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588101 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7qg6\" (UniqueName: \"kubernetes.io/projected/93c2956e-910c-4604-a9ba-86289f854a59-kube-api-access-m7qg6\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588157 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588171 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588185 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588198 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588212 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.588226 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c2956e-910c-4604-a9ba-86289f854a59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" event={"ID":"93c2956e-910c-4604-a9ba-86289f854a59","Type":"ContainerDied","Data":"bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974"} Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817515 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bfa35e5976d0f1134073dbb445bffda8319a9310e44f3019d580e15386f4c974" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.817068 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.917806 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918402 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918418 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918440 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-utilities" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918446 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-utilities" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918463 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918468 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: E0130 22:04:50.918480 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-content" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918552 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="extract-content" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918843 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="309930f6-c8a8-487c-b74e-d2010aedd851" containerName="registry-server" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.918871 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c2956e-910c-4604-a9ba-86289f854a59" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.919732 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.923819 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.923937 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.924059 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.924126 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.930285 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:04:50 crc kubenswrapper[4751]: I0130 22:04:50.930925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104007 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104242 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.104675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105484 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105817 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.105965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.210629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.210933 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 
22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.211710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.212295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.217837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.218270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.218654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.219620 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.219667 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.227990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.233608 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.256470 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.817997 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"] Jan 30 22:04:51 crc kubenswrapper[4751]: I0130 22:04:51.991579 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:04:51 crc kubenswrapper[4751]: E0130 22:04:51.992130 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.847930 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerStarted","Data":"d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692"} Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.848302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerStarted","Data":"332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947"} Jan 30 22:04:52 crc kubenswrapper[4751]: I0130 22:04:52.877453 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" podStartSLOduration=2.4155313290000002 podStartE2EDuration="2.877430022s" podCreationTimestamp="2026-01-30 22:04:50 +0000 UTC" firstStartedPulling="2026-01-30 22:04:51.829498293 +0000 UTC m=+3030.575320942" lastFinishedPulling="2026-01-30 22:04:52.291396986 +0000 UTC m=+3031.037219635" observedRunningTime="2026-01-30 22:04:52.871953495 +0000 UTC m=+3031.617776164" watchObservedRunningTime="2026-01-30 22:04:52.877430022 +0000 UTC m=+3031.623252671" Jan 30 22:05:04 crc kubenswrapper[4751]: I0130 22:05:04.976082 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:04 crc kubenswrapper[4751]: E0130 22:05:04.976992 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:18 crc kubenswrapper[4751]: I0130 22:05:18.976699 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:18 crc kubenswrapper[4751]: E0130 22:05:18.977717 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:32 crc kubenswrapper[4751]: I0130 22:05:32.976389 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:32 crc kubenswrapper[4751]: E0130 22:05:32.977481 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:44 crc kubenswrapper[4751]: I0130 22:05:44.976112 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:44 crc kubenswrapper[4751]: E0130 22:05:44.977208 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:05:55 crc kubenswrapper[4751]: I0130 22:05:55.975910 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:05:55 crc kubenswrapper[4751]: E0130 22:05:55.978255 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:06 crc kubenswrapper[4751]: I0130 22:06:06.977285 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:06 crc kubenswrapper[4751]: E0130 22:06:06.978626 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 22:06:06 crc kubenswrapper[4751]: E0130 22:06:06.978626 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:06:19 crc kubenswrapper[4751]: I0130 22:06:19.976068 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"
Jan 30 22:06:19 crc kubenswrapper[4751]: E0130 22:06:19.977634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.903679 4751 scope.go:117] "RemoveContainer" containerID="145648500fb4fe24047f9789895bde02ee47ed1fcc6d67993ff7dc9ab1a1638c"
Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.931146 4751 scope.go:117] "RemoveContainer" containerID="4dcaa711832a9bcefff451d85870a1e1c9f1f1df5c264b8880f8f7854b2f6a5e"
Jan 30 22:06:27 crc kubenswrapper[4751]: I0130 22:06:27.992099 4751 scope.go:117] "RemoveContainer" containerID="ff119fbbce42b05dc2414135ad63839e511e523a3e25e47a1bb53a0ee43eb1ea"
Jan 30 22:06:33 crc kubenswrapper[4751]: I0130 22:06:33.975624 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"
Jan 30 22:06:33 crc kubenswrapper[4751]: E0130 22:06:33.977689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:06:36 crc kubenswrapper[4751]: I0130 22:06:36.042396 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerID="d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692" exitCode=0
Jan 30 22:06:36 crc kubenswrapper[4751]: I0130 22:06:36.042621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerDied","Data":"d5854d35428dbe82b166fa29bb387cba4ba66dda4f1e4fe2b120d899a0114692"}
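
This is the normal end of a run-to-completion deployment pod: the job container exits with exitCode=0, PLEG reports ContainerDied, and in the lines that follow the reconciler unmounts every volume it set up at 22:04:51 and finally marks each one "Volume detached". A throwaway Go filter (not kubelet code) for checking that pairing over a log like this one, matching the operation_generator/reconciler messages seen here:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        setup  = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"`)
        detach = regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)
    )

    func main() {
        pending := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries here are very long
        for sc.Scan() {
            if m := setup.FindStringSubmatch(sc.Text()); m != nil {
                pending[m[1]]++
            }
            if m := detach.FindStringSubmatch(sc.Text()); m != nil {
                pending[m[1]]--
            }
        }
        for vol, n := range pending {
            if n != 0 {
                fmt.Printf("volume %q: %+d setup(s) without a matching detach\n", vol, n)
            }
        }
    }

Fed this section on stdin, each of the telemetry pod's seven SetUp lines at 22:04:51 is balanced by a detach at 22:06:37; names left unbalanced point at pods, like the redhat-operators one later in this log, whose teardown falls outside the captured window.
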
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.557453 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v"
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661340 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661517 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.661786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.662491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.662583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") pod \"ac636140-8b68-474a-a7f9-7d46e6a22de0\" (UID: \"ac636140-8b68-474a-a7f9-7d46e6a22de0\") "
Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.669697 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.673415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf" (OuterVolumeSpecName: "kube-api-access-5clxf") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "kube-api-access-5clxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.698140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.699832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.708376 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory" (OuterVolumeSpecName: "inventory") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.708495 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.709700 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac636140-8b68-474a-a7f9-7d46e6a22de0" (UID: "ac636140-8b68-474a-a7f9-7d46e6a22de0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765870 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765905 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765916 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765930 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765939 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765949 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clxf\" (UniqueName: \"kubernetes.io/projected/ac636140-8b68-474a-a7f9-7d46e6a22de0-kube-api-access-5clxf\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:37 crc kubenswrapper[4751]: I0130 22:06:37.765957 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac636140-8b68-474a-a7f9-7d46e6a22de0-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" event={"ID":"ac636140-8b68-474a-a7f9-7d46e6a22de0","Type":"ContainerDied","Data":"332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947"} Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066967 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332f4559336eaf6a29c2a93a27c6b1335414d93c7cfb9ce025b56c0626f75947" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.066699 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.227887 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:38 crc kubenswrapper[4751]: E0130 22:06:38.243011 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.243185 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.247368 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac636140-8b68-474a-a7f9-7d46e6a22de0" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.249249 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254378 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254581 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.254857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.255054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.255347 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x6svn" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.275726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") 
pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278923 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.278978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.382996 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383174 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.383305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.387882 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.388292 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.402731 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jn6hm\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:38 crc kubenswrapper[4751]: I0130 22:06:38.581208 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:39 crc kubenswrapper[4751]: I0130 22:06:39.132494 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:06:39 crc kubenswrapper[4751]: I0130 22:06:39.134311 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"] Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.091225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerStarted","Data":"91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4"} Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.091524 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerStarted","Data":"92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6"} Jan 30 22:06:40 crc kubenswrapper[4751]: I0130 22:06:40.120184 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" podStartSLOduration=1.719145718 podStartE2EDuration="2.120161691s" podCreationTimestamp="2026-01-30 22:06:38 +0000 UTC" firstStartedPulling="2026-01-30 22:06:39.132210908 +0000 UTC m=+3137.878033557" lastFinishedPulling="2026-01-30 22:06:39.533226881 +0000 UTC m=+3138.279049530" observedRunningTime="2026-01-30 22:06:40.111545959 +0000 UTC m=+3138.857368618" watchObservedRunningTime="2026-01-30 22:06:40.120161691 +0000 UTC m=+3138.865984340" Jan 30 22:06:47 crc kubenswrapper[4751]: I0130 22:06:47.983188 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:06:47 crc kubenswrapper[4751]: E0130 22:06:47.983995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:06:54 crc kubenswrapper[4751]: I0130 22:06:54.261973 4751 generic.go:334] "Generic (PLEG): container finished" podID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerID="91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4" exitCode=0 Jan 30 22:06:54 crc kubenswrapper[4751]: I0130 22:06:54.262047 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerDied","Data":"91af7206bc59e56c511a5bb1490384ea5d6919d8e986e93a29d06a5b971b37d4"} Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.775621 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.920790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921025 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921060 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.921264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") pod \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\" (UID: \"61149618-7cc3-4dd6-b61a-0fb8226f2cc1\") " Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.926852 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9" (OuterVolumeSpecName: "kube-api-access-fzrs9") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "kube-api-access-fzrs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.960766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.973138 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory" (OuterVolumeSpecName: "inventory") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.974441 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:55 crc kubenswrapper[4751]: I0130 22:06:55.985302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61149618-7cc3-4dd6-b61a-0fb8226f2cc1" (UID: "61149618-7cc3-4dd6-b61a-0fb8226f2cc1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023913 4751 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023947 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzrs9\" (UniqueName: \"kubernetes.io/projected/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-kube-api-access-fzrs9\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023956 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023967 4751 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.023979 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61149618-7cc3-4dd6-b61a-0fb8226f2cc1-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm" event={"ID":"61149618-7cc3-4dd6-b61a-0fb8226f2cc1","Type":"ContainerDied","Data":"92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6"} Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283448 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92dbf94d199fec6f4332eccc79025d40e821e34facf432cb7e9915a997164fe6" Jan 30 22:06:56 crc kubenswrapper[4751]: I0130 22:06:56.283501 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jn6hm"
Jan 30 22:07:01 crc kubenswrapper[4751]: I0130 22:07:01.985797 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"
Jan 30 22:07:01 crc kubenswrapper[4751]: E0130 22:07:01.986594 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:07:13 crc kubenswrapper[4751]: I0130 22:07:13.976124 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"
Jan 30 22:07:13 crc kubenswrapper[4751]: E0130 22:07:13.976909 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:07:24 crc kubenswrapper[4751]: I0130 22:07:24.977091 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0"
Jan 30 22:07:25 crc kubenswrapper[4751]: I0130 22:07:25.578792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"}
Jan 30 22:07:44 crc kubenswrapper[4751]: E0130 22:07:44.043270 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:46226->38.102.83.39:41127: write tcp 38.102.83.39:46226->38.102.83.39:41127: write: broken pipe
Jan 30 22:09:54 crc kubenswrapper[4751]: I0130 22:09:54.126364 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:09:54 crc kubenswrapper[4751]: I0130 22:09:54.128011 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:10:24 crc kubenswrapper[4751]: I0130 22:10:24.126341 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
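
The liveness probe failures above arrive exactly 30 seconds apart (22:09:54, 22:10:24, and a third at 22:10:54 just below), and only after the third does kubelet mark the container unhealthy and kill it, so this pod evidently runs with roughly periodSeconds=30 and failureThreshold=3; both values are inferred from the timestamps, not read from the pod spec. A stripped-down sketch of that probe loop against the same endpoint:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const (
            url              = "http://127.0.0.1:8798/health" // endpoint from the log
            period           = 30 * time.Second               // inferred probe interval
            failureThreshold = 3                               // inferred failure count
        )
        client := &http.Client{Timeout: time.Second}
        failures := 0
        for range time.Tick(period) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode < 400 {
                resp.Body.Close()
                failures = 0 // any success resets the counter
                continue
            }
            if err == nil {
                resp.Body.Close()
            }
            failures++
            fmt.Println("probe failed:", failures, "of", failureThreshold)
            if failures >= failureThreshold {
                fmt.Println("would restart container (kill with grace period)")
                failures = 0
            }
        }
    }
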
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.127394 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.128014 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.128073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.129026 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.129087 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619" gracePeriod=600 Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.768841 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619" exitCode=0 Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"} Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769293 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"} Jan 30 22:10:54 crc kubenswrapper[4751]: I0130 22:10:54.769309 4751 scope.go:117] "RemoveContainer" containerID="6b4104fb342bd7b65d4f4b38c6e46e45701afa8041dd19349dd8fc8307a2f6e0" Jan 30 22:11:02 crc kubenswrapper[4751]: E0130 22:11:02.773358 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:38106->38.102.83.39:41127: write tcp 38.102.83.39:38106->38.102.83.39:41127: write: broken pipe Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.706040 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:34 crc kubenswrapper[4751]: E0130 22:11:34.707113 4751 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.707128 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.707373 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61149618-7cc3-4dd6-b61a-0fb8226f2cc1" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.709003 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.723369 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836720 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.836898 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.939628 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.939771 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.940834 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:34 crc kubenswrapper[4751]: I0130 22:11:34.960941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"certified-operators-g256h\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:35 crc kubenswrapper[4751]: I0130 22:11:35.029036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:35 crc kubenswrapper[4751]: I0130 22:11:35.737111 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222604 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" exitCode=0 Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a"} Jan 30 22:11:36 crc kubenswrapper[4751]: I0130 22:11:36.222942 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"1b7078cb60a3b033349673f6ec5cf0c8630bd6c1147e95ae4b0ffca0c14c2e81"} Jan 30 22:11:37 crc kubenswrapper[4751]: I0130 22:11:37.236285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.257258 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" exitCode=0 Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.257393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} Jan 30 22:11:39 crc kubenswrapper[4751]: I0130 22:11:39.261430 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:11:40 crc kubenswrapper[4751]: I0130 22:11:40.269142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" 
event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerStarted","Data":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} Jan 30 22:11:40 crc kubenswrapper[4751]: I0130 22:11:40.302715 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g256h" podStartSLOduration=2.750842593 podStartE2EDuration="6.302692564s" podCreationTimestamp="2026-01-30 22:11:34 +0000 UTC" firstStartedPulling="2026-01-30 22:11:36.224922597 +0000 UTC m=+3434.970745246" lastFinishedPulling="2026-01-30 22:11:39.776772568 +0000 UTC m=+3438.522595217" observedRunningTime="2026-01-30 22:11:40.290003383 +0000 UTC m=+3439.035826032" watchObservedRunningTime="2026-01-30 22:11:40.302692564 +0000 UTC m=+3439.048515213" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.029389 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.030085 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.084875 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.371928 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:45 crc kubenswrapper[4751]: I0130 22:11:45.424005 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.339384 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g256h" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server" containerID="cri-o://bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" gracePeriod=2 Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.743315 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.748310 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.778189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.846765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949190 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949297 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.949483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.950107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.953688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:47 crc kubenswrapper[4751]: I0130 22:11:47.971671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"redhat-operators-gtfms\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") " pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.069237 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.091280 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.270759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.271118 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.271156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") pod \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\" (UID: \"a54efd8f-acd9-4019-8a15-da81fc80ad4d\") " Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.272080 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities" (OuterVolumeSpecName: "utilities") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.276457 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg" (OuterVolumeSpecName: "kube-api-access-pwbqg") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "kube-api-access-pwbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.337357 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a54efd8f-acd9-4019-8a15-da81fc80ad4d" (UID: "a54efd8f-acd9-4019-8a15-da81fc80ad4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374224 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbqg\" (UniqueName: \"kubernetes.io/projected/a54efd8f-acd9-4019-8a15-da81fc80ad4d-kube-api-access-pwbqg\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374254 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.374264 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54efd8f-acd9-4019-8a15-da81fc80ad4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376835 4751 generic.go:334] "Generic (PLEG): container finished" podID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" exitCode=0 Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376897 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g256h" event={"ID":"a54efd8f-acd9-4019-8a15-da81fc80ad4d","Type":"ContainerDied","Data":"1b7078cb60a3b033349673f6ec5cf0c8630bd6c1147e95ae4b0ffca0c14c2e81"} Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.376916 4751 scope.go:117] "RemoveContainer" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.377069 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g256h" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.441513 4751 scope.go:117] "RemoveContainer" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.454258 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.469042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g256h"] Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.522822 4751 scope.go:117] "RemoveContainer" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.580485 4751 scope.go:117] "RemoveContainer" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.581451 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": container with ID starting with bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e not found: ID does not exist" containerID="bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581492 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e"} err="failed to get container status \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": rpc error: code = NotFound desc = could not find container \"bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e\": container with ID starting with bf2e550fe8c5e221a4d3c4173acd789c659c97506bb2bfc263148c7ca7985d5e not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581518 4751 scope.go:117] "RemoveContainer" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.581774 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": container with ID starting with ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b not found: ID does not exist" containerID="ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581796 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b"} err="failed to get container status \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": rpc error: code = NotFound desc = could not find container \"ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b\": container with ID starting with ef501f5db7cadcd6c70e26bf455fe0d9e50ec670435f9177a573a6b51013646b not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.581811 4751 scope.go:117] "RemoveContainer" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: E0130 22:11:48.583410 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": container with ID starting with 3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a not found: ID does not exist" containerID="3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.583459 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a"} err="failed to get container status \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": rpc error: code = NotFound desc = could not find container \"3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a\": container with ID starting with 3209c9707b371fe08c3442ba8b1fa21267d43f3d5b2980ea89cc264d338ce17a not found: ID does not exist" Jan 30 22:11:48 crc kubenswrapper[4751]: I0130 22:11:48.766646 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"] Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390527 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610" exitCode=0 Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"} Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.390844 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"8454d78d5588531b7a20a01473bd459440e4383a613aff3fdd243ec56ac18a03"} Jan 30 22:11:49 crc kubenswrapper[4751]: I0130 22:11:49.989803 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" path="/var/lib/kubelet/pods/a54efd8f-acd9-4019-8a15-da81fc80ad4d/volumes" Jan 30 22:11:50 crc kubenswrapper[4751]: I0130 22:11:50.407307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} Jan 30 22:11:55 crc kubenswrapper[4751]: I0130 22:11:55.460086 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2" exitCode=0 Jan 30 22:11:55 crc kubenswrapper[4751]: I0130 22:11:55.460606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} Jan 30 22:11:56 crc kubenswrapper[4751]: I0130 22:11:56.475610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerStarted","Data":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"} Jan 30 22:11:56 crc kubenswrapper[4751]: I0130 22:11:56.504105 4751 
Jan 30 22:11:56 crc kubenswrapper[4751]: I0130 22:11:56.504105 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gtfms" podStartSLOduration=3.068434338 podStartE2EDuration="9.504086103s" podCreationTimestamp="2026-01-30 22:11:47 +0000 UTC" firstStartedPulling="2026-01-30 22:11:49.394036081 +0000 UTC m=+3448.139858730" lastFinishedPulling="2026-01-30 22:11:55.829687846 +0000 UTC m=+3454.575510495" observedRunningTime="2026-01-30 22:11:56.497435613 +0000 UTC m=+3455.243258272" watchObservedRunningTime="2026-01-30 22:11:56.504086103 +0000 UTC m=+3455.249908752"
Jan 30 22:11:58 crc kubenswrapper[4751]: I0130 22:11:58.070283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtfms"
Jan 30 22:11:58 crc kubenswrapper[4751]: I0130 22:11:58.070616 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtfms"
Jan 30 22:11:59 crc kubenswrapper[4751]: I0130 22:11:59.119141 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:11:59 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:11:59 crc kubenswrapper[4751]: >
Jan 30 22:12:09 crc kubenswrapper[4751]: I0130 22:12:09.121575 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:12:09 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:12:09 crc kubenswrapper[4751]: >
Jan 30 22:12:19 crc kubenswrapper[4751]: I0130 22:12:19.135088 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:12:19 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:12:19 crc kubenswrapper[4751]: >
Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.135089 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtfms"
Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.188588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtfms"
Jan 30 22:12:28 crc kubenswrapper[4751]: I0130 22:12:28.381965 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"]
Jan 30 22:12:29 crc kubenswrapper[4751]: I0130 22:12:29.835031 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtfms" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server" containerID="cri-o://a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" gracePeriod=2
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.452760 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms"
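
The Startup probe failures above are the registry-server container's health check timing out against port 50051 with a one-second budget; the three failures land ten seconds apart (22:11:59, 22:12:09, 22:12:19), consistent with a 10s probe period, before the probe reports started at 22:12:28. A minimal sketch of the equivalent reachability check, simplified to a plain TCP connect (the real probe speaks gRPC to this port; host, port, and timeout are taken from the probe output):

    import socket

    def can_connect(host="localhost", port=50051, timeout=1.0):
        """True if a TCP connection to host:port completes within the
        probe's 1s budget; False on timeout or connection refusal."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False
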
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517081 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") "
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517157 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") "
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.517240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") pod \"e5e3f459-601d-4d72-a9c9-8113f86749e6\" (UID: \"e5e3f459-601d-4d72-a9c9-8113f86749e6\") "
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.518401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities" (OuterVolumeSpecName: "utilities") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.522893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8" (OuterVolumeSpecName: "kube-api-access-mqhc8") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "kube-api-access-mqhc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.619282 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhc8\" (UniqueName: \"kubernetes.io/projected/e5e3f459-601d-4d72-a9c9-8113f86749e6-kube-api-access-mqhc8\") on node \"crc\" DevicePath \"\""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.619313 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.644796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5e3f459-601d-4d72-a9c9-8113f86749e6" (UID: "e5e3f459-601d-4d72-a9c9-8113f86749e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.721655 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e3f459-601d-4d72-a9c9-8113f86749e6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858870 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69" exitCode=0
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"}
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtfms" event={"ID":"e5e3f459-601d-4d72-a9c9-8113f86749e6","Type":"ContainerDied","Data":"8454d78d5588531b7a20a01473bd459440e4383a613aff3fdd243ec56ac18a03"}
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.858989 4751 scope.go:117] "RemoveContainer" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.859012 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtfms"
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.897655 4751 scope.go:117] "RemoveContainer" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.913357 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"]
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.933165 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtfms"]
Jan 30 22:12:30 crc kubenswrapper[4751]: I0130 22:12:30.939837 4751 scope.go:117] "RemoveContainer" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.003378 4751 scope.go:117] "RemoveContainer" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"
Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.003925 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": container with ID starting with a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69 not found: ID does not exist" containerID="a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.004083 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69"} err="failed to get container status \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": rpc error: code = NotFound desc = could not find container \"a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69\": container with ID starting with a11579f08b384e0a99ab401169f8b2dccf25446e9b6bf16ee96ca3016dcf2e69 not found: ID does not exist"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.004246 4751 scope.go:117] "RemoveContainer" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"
Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.004783 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": container with ID starting with 86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2 not found: ID does not exist" containerID="86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.004958 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2"} err="failed to get container status \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": rpc error: code = NotFound desc = could not find container \"86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2\": container with ID starting with 86d3814e4b789663793e35385bd4da11af9e9bb619d1749d6030676a27c663d2 not found: ID does not exist"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.005121 4751 scope.go:117] "RemoveContainer" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"
Jan 30 22:12:31 crc kubenswrapper[4751]: E0130 22:12:31.005568 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": container with ID starting with f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610 not found: ID does not exist" containerID="f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.005594 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610"} err="failed to get container status \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": rpc error: code = NotFound desc = could not find container \"f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610\": container with ID starting with f62194cd5691a9a1ce6156c18dd4fba7f56689c9bbabf66bc559093a26550610 not found: ID does not exist"
Jan 30 22:12:31 crc kubenswrapper[4751]: I0130 22:12:31.991308 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" path="/var/lib/kubelet/pods/e5e3f459-601d-4d72-a9c9-8113f86749e6/volumes"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.783105 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb8v7"]
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784072 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-content"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784086 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-content"
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784099 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-utilities"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784105 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-utilities"
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784124 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-utilities"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="extract-utilities"
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784172 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-content"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784179 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="extract-content"
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784195 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784202 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: E0130 22:12:36.784207 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784213 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784435 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54efd8f-acd9-4019-8a15-da81fc80ad4d" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.784461 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e3f459-601d-4d72-a9c9-8113f86749e6" containerName="registry-server"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.786432 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.822759 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"]
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.872477 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.976641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.977093 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:36 crc kubenswrapper[4751]: I0130 22:12:36.977471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.004089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"community-operators-wb8v7\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.134138 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7"
Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.741378 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"]
Jan 30 22:12:37 crc kubenswrapper[4751]: I0130 22:12:37.940977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"5440b0621535fecd6a06b59ed63d2ee9b603afd691e3da53d98973f03fbc2cc2"}
Jan 30 22:12:38 crc kubenswrapper[4751]: I0130 22:12:38.953977 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" exitCode=0
Jan 30 22:12:38 crc kubenswrapper[4751]: I0130 22:12:38.954092 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"}
Jan 30 22:12:39 crc kubenswrapper[4751]: I0130 22:12:39.964873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"}
Jan 30 22:12:41 crc kubenswrapper[4751]: I0130 22:12:41.987964 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" exitCode=0
Jan 30 22:12:41 crc kubenswrapper[4751]: I0130 22:12:41.990799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"}
Jan 30 22:12:43 crc kubenswrapper[4751]: I0130 22:12:43.001753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerStarted","Data":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"}
Jan 30 22:12:43 crc kubenswrapper[4751]: I0130 22:12:43.029495 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb8v7" podStartSLOduration=3.605691608 podStartE2EDuration="7.029476549s" podCreationTimestamp="2026-01-30 22:12:36 +0000 UTC" firstStartedPulling="2026-01-30 22:12:38.957032584 +0000 UTC m=+3497.702855233" lastFinishedPulling="2026-01-30 22:12:42.380817525 +0000 UTC m=+3501.126640174" observedRunningTime="2026-01-30 22:12:43.021549666 +0000 UTC m=+3501.767372315" watchObservedRunningTime="2026-01-30 22:12:43.029476549 +0000 UTC m=+3501.775299198"
status="unhealthy" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:47 crc kubenswrapper[4751]: I0130 22:12:47.135730 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:47 crc kubenswrapper[4751]: I0130 22:12:47.187270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:48 crc kubenswrapper[4751]: I0130 22:12:48.116852 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:48 crc kubenswrapper[4751]: I0130 22:12:48.428251 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.074276 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb8v7" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server" containerID="cri-o://195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" gracePeriod=2 Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.600003 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710069 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710785 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") pod \"b2702e38-6753-43af-9a56-dd00aba1250f\" (UID: \"b2702e38-6753-43af-9a56-dd00aba1250f\") " Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.710911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities" (OuterVolumeSpecName: "utilities") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.711668 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.716911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2" (OuterVolumeSpecName: "kube-api-access-2zhm2") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "kube-api-access-2zhm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.753765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2702e38-6753-43af-9a56-dd00aba1250f" (UID: "b2702e38-6753-43af-9a56-dd00aba1250f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.814496 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zhm2\" (UniqueName: \"kubernetes.io/projected/b2702e38-6753-43af-9a56-dd00aba1250f-kube-api-access-2zhm2\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:50 crc kubenswrapper[4751]: I0130 22:12:50.814551 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2702e38-6753-43af-9a56-dd00aba1250f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087224 4751 generic.go:334] "Generic (PLEG): container finished" podID="b2702e38-6753-43af-9a56-dd00aba1250f" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" exitCode=0 Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"} Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087304 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb8v7" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087340 4751 scope.go:117] "RemoveContainer" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.087311 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb8v7" event={"ID":"b2702e38-6753-43af-9a56-dd00aba1250f","Type":"ContainerDied","Data":"5440b0621535fecd6a06b59ed63d2ee9b603afd691e3da53d98973f03fbc2cc2"} Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.129688 4751 scope.go:117] "RemoveContainer" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.142973 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.156583 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb8v7"] Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.160258 4751 scope.go:117] "RemoveContainer" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d" Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.219536 4751 scope.go:117] "RemoveContainer" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d" Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220117 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": container with ID starting with 
Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220117 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": container with ID starting with 195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d not found: ID does not exist" containerID="195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220156 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d"} err="failed to get container status \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": rpc error: code = NotFound desc = could not find container \"195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d\": container with ID starting with 195d1f858b0e3cc01f13d0b88e5e1c66c4de1476d6822d7498bfaf61eee3328d not found: ID does not exist"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220181 4751 scope.go:117] "RemoveContainer" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"
Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220609 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": container with ID starting with 8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e not found: ID does not exist" containerID="8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220635 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e"} err="failed to get container status \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": rpc error: code = NotFound desc = could not find container \"8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e\": container with ID starting with 8358e6de610d29adca602bcccc39fafed818a6d0788975d56bba84a9fd961e4e not found: ID does not exist"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.220649 4751 scope.go:117] "RemoveContainer" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"
Jan 30 22:12:51 crc kubenswrapper[4751]: E0130 22:12:51.220958 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": container with ID starting with 91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d not found: ID does not exist" containerID="91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.221016 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d"} err="failed to get container status \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": rpc error: code = NotFound desc = could not find container \"91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d\": container with ID starting with 91e2d29c10894d3ddeaff53f76d5e7bdb749e03be9cc503526e9d3a57733919d not found: ID does not exist"
Jan 30 22:12:51 crc kubenswrapper[4751]: I0130 22:12:51.991862 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" path="/var/lib/kubelet/pods/b2702e38-6753-43af-9a56-dd00aba1250f/volumes"
Jan 30 22:12:54 crc kubenswrapper[4751]: I0130 22:12:54.127345 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:12:54 crc kubenswrapper[4751]: I0130 22:12:54.128446 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:13:24 crc kubenswrapper[4751]: I0130 22:13:24.126568 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:13:24 crc kubenswrapper[4751]: I0130 22:13:24.127204 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.126739 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.127370 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.127434 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.128402 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.128461 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" gracePeriod=600
Jan 30 22:13:54 crc kubenswrapper[4751]: E0130 22:13:54.263459 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
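
The "back-off 5m0s restarting failed container" error above is the crash-loop restart backoff sitting at its ceiling; the same message recurs below (22:14:05, 22:14:19) while the daemon waits out the window. A sketch of the usual backoff shape, assuming the commonly documented kubelet defaults (10s base, doubling per restart, five-minute cap) rather than anything read from this log:

    def crashloop_delay(restarts, base=10.0, cap=300.0):
        """Restart delay after n crashes: doubles each time, capped at 5m.
        Base/factor/cap are assumed defaults, not read from this log."""
        return min(cap, base * (2 ** restarts))

    # 10s, 20s, 40s, 80s, 160s, 300s, 300s ... -> the "back-off 5m0s" ceiling
    print([crashloop_delay(n) for n in range(7)])
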
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.745997 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" exitCode=0
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.746043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"}
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.746075 4751 scope.go:117] "RemoveContainer" containerID="b7204a414860b1d9f7ebaccba0c3c85f4ccaeeed68090f146baeabd5dcaab619"
Jan 30 22:13:54 crc kubenswrapper[4751]: I0130 22:13:54.747233 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a"
Jan 30 22:13:54 crc kubenswrapper[4751]: E0130 22:13:54.747765 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.041886 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"]
Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-utilities"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044130 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-utilities"
Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044179 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044189 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server"
Jan 30 22:14:03 crc kubenswrapper[4751]: E0130 22:14:03.044242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-content"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.044252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="extract-content"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.045102 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2702e38-6753-43af-9a56-dd00aba1250f" containerName="registry-server"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.059803 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.065803 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"]
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.236999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.237554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.237726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339548 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.339634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.340082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.340159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd"
succeeded for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"redhat-marketplace-c27dd\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.398993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:03 crc kubenswrapper[4751]: W0130 22:14:03.926872 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb95606b5_59a9_4df1_8aff_012ba61fe3ed.slice/crio-810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204 WatchSource:0}: Error finding container 810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204: Status 404 returned error can't find the container with id 810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204 Jan 30 22:14:03 crc kubenswrapper[4751]: I0130 22:14:03.934297 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870738 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05" exitCode=0 Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"} Jan 30 22:14:04 crc kubenswrapper[4751]: I0130 22:14:04.870820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204"} Jan 30 22:14:05 crc kubenswrapper[4751]: I0130 22:14:05.976704 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:05 crc kubenswrapper[4751]: E0130 22:14:05.977244 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:06 crc kubenswrapper[4751]: I0130 22:14:06.899504 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} Jan 30 22:14:07 crc kubenswrapper[4751]: I0130 22:14:07.912015 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a" exitCode=0 Jan 30 22:14:07 crc kubenswrapper[4751]: I0130 22:14:07.912070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" 
event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} Jan 30 22:14:09 crc kubenswrapper[4751]: I0130 22:14:09.937461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerStarted","Data":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"} Jan 30 22:14:09 crc kubenswrapper[4751]: I0130 22:14:09.961316 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c27dd" podStartSLOduration=3.873274584 podStartE2EDuration="7.961291089s" podCreationTimestamp="2026-01-30 22:14:02 +0000 UTC" firstStartedPulling="2026-01-30 22:14:04.873079454 +0000 UTC m=+3583.618902103" lastFinishedPulling="2026-01-30 22:14:08.961095959 +0000 UTC m=+3587.706918608" observedRunningTime="2026-01-30 22:14:09.956169872 +0000 UTC m=+3588.701992531" watchObservedRunningTime="2026-01-30 22:14:09.961291089 +0000 UTC m=+3588.707113738" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.399351 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.399903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:13 crc kubenswrapper[4751]: I0130 22:14:13.462923 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:19 crc kubenswrapper[4751]: I0130 22:14:19.977981 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:19 crc kubenswrapper[4751]: E0130 22:14:19.979142 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:23 crc kubenswrapper[4751]: I0130 22:14:23.453727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:23 crc kubenswrapper[4751]: I0130 22:14:23.508788 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.081288 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c27dd" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" containerID="cri-o://3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" gracePeriod=2 Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.612744 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774926 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.774970 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") pod \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\" (UID: \"b95606b5-59a9-4df1-8aff-012ba61fe3ed\") " Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.776025 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities" (OuterVolumeSpecName: "utilities") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.783761 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6" (OuterVolumeSpecName: "kube-api-access-hkgj6") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "kube-api-access-hkgj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.807577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b95606b5-59a9-4df1-8aff-012ba61fe3ed" (UID: "b95606b5-59a9-4df1-8aff-012ba61fe3ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877917 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877959 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgj6\" (UniqueName: \"kubernetes.io/projected/b95606b5-59a9-4df1-8aff-012ba61fe3ed-kube-api-access-hkgj6\") on node \"crc\" DevicePath \"\"" Jan 30 22:14:24 crc kubenswrapper[4751]: I0130 22:14:24.877971 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b95606b5-59a9-4df1-8aff-012ba61fe3ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096230 4751 generic.go:334] "Generic (PLEG): container finished" podID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" exitCode=0 Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096289 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c27dd" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096357 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"} Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c27dd" event={"ID":"b95606b5-59a9-4df1-8aff-012ba61fe3ed","Type":"ContainerDied","Data":"810add7a4a9f5890988c7d980249519e20731df697bfe1df04466ec004b1d204"} Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.096728 4751 scope.go:117] "RemoveContainer" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.124979 4751 scope.go:117] "RemoveContainer" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.140451 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.152647 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c27dd"] Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.164950 4751 scope.go:117] "RemoveContainer" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239475 4751 scope.go:117] "RemoveContainer" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.239910 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": container with ID starting with 3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e not found: ID does not exist" containerID="3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239957 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e"} err="failed to get container status \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": rpc error: code = NotFound desc = could not find container \"3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e\": container with ID starting with 3e41b5a8ab7c807e7af45d1b8d9f2bf21f61834e2d0999cbcf001c284438342e not found: ID does not exist" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.239986 4751 scope.go:117] "RemoveContainer" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a" Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.240240 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": container with ID starting with f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a not found: ID does not exist" containerID="f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240274 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a"} err="failed to get container status \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": rpc error: code = NotFound desc = could not find container \"f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a\": container with ID starting with f6e0dbdbac04ff3499e1728e3cf7185d18e21183eb3203fc3c8f15b2c633049a not found: ID does not exist" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240290 4751 scope.go:117] "RemoveContainer" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05" Jan 30 22:14:25 crc kubenswrapper[4751]: E0130 22:14:25.240803 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": container with ID starting with 7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05 not found: ID does not exist" containerID="7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.240825 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05"} err="failed to get container status \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": rpc error: code = NotFound desc = could not find container \"7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05\": container with ID starting with 7b8c200317023ad4ccf9f35bb56c063f562f5cf335a011cb43c010d3dc0cda05 not found: ID does not exist" Jan 30 22:14:25 crc kubenswrapper[4751]: I0130 22:14:25.987148 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" path="/var/lib/kubelet/pods/b95606b5-59a9-4df1-8aff-012ba61fe3ed/volumes" Jan 30 22:14:33 crc kubenswrapper[4751]: I0130 22:14:33.976466 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:33 crc kubenswrapper[4751]: E0130 22:14:33.977434 4751 pod_workers.go:1301] "Error syncing pod, skipping" 
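The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are the kubelet re-deleting containers that are already gone: the CRI lookup comes back as gRPC NotFound, which is logged and then ignored. A hedged sketch (not kubelet source; the helper and its callbacks are hypothetical) of the idempotent-cleanup pattern that makes these errors harmless:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent is a hypothetical helper: statusFn stands in for a CRI
    // ContainerStatus call and removeFn for RemoveContainer.
    func removeIfPresent(id string, statusFn, removeFn func(string) error) error {
        if err := statusFn(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already removed: log-and-continue, as in the entries above
            }
            return fmt.Errorf("container status %s: %w", id, err)
        }
        return removeFn(id)
    }

    func main() {
        gone := func(string) error { return status.Error(codes.NotFound, "could not find container") }
        remove := func(string) error { return nil }
        fmt.Println(removeIfPresent("example-id", gone, remove)) // <nil>: NotFound treated as success
    }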
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:14:48 crc kubenswrapper[4751]: I0130 22:14:48.976183 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:14:48 crc kubenswrapper[4751]: E0130 22:14:48.976942 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.175098 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176299 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176317 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-utilities" Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176359 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176369 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: E0130 22:15:00.176383 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176390 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="extract-content" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.176705 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95606b5-59a9-4df1-8aff-012ba61fe3ed" containerName="registry-server" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.177796 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.180082 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.181009 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.188608 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.286986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.287264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.287350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.390274 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.390736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.391003 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.391436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod 
\"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.397867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.408694 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"collect-profiles-29496855-ncq7m\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.511992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:00 crc kubenswrapper[4751]: I0130 22:15:00.992519 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"] Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.467534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerStarted","Data":"17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded"} Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.467602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerStarted","Data":"de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670"} Jan 30 22:15:01 crc kubenswrapper[4751]: I0130 22:15:01.491528 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" podStartSLOduration=1.491506837 podStartE2EDuration="1.491506837s" podCreationTimestamp="2026-01-30 22:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:15:01.482892995 +0000 UTC m=+3640.228715674" watchObservedRunningTime="2026-01-30 22:15:01.491506837 +0000 UTC m=+3640.237329486" Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.478562 4751 generic.go:334] "Generic (PLEG): container finished" podID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerID="17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded" exitCode=0 Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.478669 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerDied","Data":"17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded"} Jan 30 22:15:02 crc kubenswrapper[4751]: I0130 22:15:02.976547 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:02 crc kubenswrapper[4751]: E0130 22:15:02.977006 4751 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.917381 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.975904 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976223 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") pod \"3f9671fd-4ee5-4071-8dd4-86a335928d79\" (UID: \"3f9671fd-4ee5-4071-8dd4-86a335928d79\") " Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.976778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.977431 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9671fd-4ee5-4071-8dd4-86a335928d79-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.988854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:15:03 crc kubenswrapper[4751]: I0130 22:15:03.988912 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5" (OuterVolumeSpecName: "kube-api-access-d5pz5") pod "3f9671fd-4ee5-4071-8dd4-86a335928d79" (UID: "3f9671fd-4ee5-4071-8dd4-86a335928d79"). InnerVolumeSpecName "kube-api-access-d5pz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.082046 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9671fd-4ee5-4071-8dd4-86a335928d79-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.082707 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5pz5\" (UniqueName: \"kubernetes.io/projected/3f9671fd-4ee5-4071-8dd4-86a335928d79-kube-api-access-d5pz5\") on node \"crc\" DevicePath \"\"" Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.498932 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" event={"ID":"3f9671fd-4ee5-4071-8dd4-86a335928d79","Type":"ContainerDied","Data":"de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670"} Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.498973 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de23f390426cd77b0e6c6fe987b57369c253861ad994659a19edd6c3ffecb670" Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.499024 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m" Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.569217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"] Jan 30 22:15:04 crc kubenswrapper[4751]: I0130 22:15:04.587208 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496810-rblx8"] Jan 30 22:15:06 crc kubenswrapper[4751]: I0130 22:15:06.001229 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6" path="/var/lib/kubelet/pods/44a7576c-6fb5-4ec9-bec3-7e9c8a1153e6/volumes" Jan 30 22:15:14 crc kubenswrapper[4751]: I0130 22:15:14.976662 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:14 crc kubenswrapper[4751]: E0130 22:15:14.977677 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:15:28 crc kubenswrapper[4751]: I0130 22:15:28.357087 4751 scope.go:117] "RemoveContainer" containerID="664bdfce98a1e87d41664b73e411b35da3c4e69be04f5631e859fc26af9552e4" Jan 30 22:15:29 crc kubenswrapper[4751]: I0130 22:15:29.976425 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:29 crc kubenswrapper[4751]: E0130 22:15:29.982155 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 
22:15:44 crc kubenswrapper[4751]: I0130 22:15:44.976601 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:44 crc kubenswrapper[4751]: E0130 22:15:44.977537 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:15:55 crc kubenswrapper[4751]: I0130 22:15:55.978236 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:15:55 crc kubenswrapper[4751]: E0130 22:15:55.979188 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:16:10 crc kubenswrapper[4751]: I0130 22:16:10.977571 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:16:10 crc kubenswrapper[4751]: E0130 22:16:10.978382 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:16:25 crc kubenswrapper[4751]: I0130 22:16:25.976748 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:16:25 crc kubenswrapper[4751]: E0130 22:16:25.977692 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:16:39 crc kubenswrapper[4751]: I0130 22:16:39.975895 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:16:39 crc kubenswrapper[4751]: E0130 22:16:39.976733 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:16:52 crc kubenswrapper[4751]: I0130 22:16:52.976563 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:16:52 crc 
kubenswrapper[4751]: E0130 22:16:52.977809 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:17:07 crc kubenswrapper[4751]: I0130 22:17:07.976832 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:17:07 crc kubenswrapper[4751]: E0130 22:17:07.978658 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:17:18 crc kubenswrapper[4751]: I0130 22:17:18.976154 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:17:18 crc kubenswrapper[4751]: E0130 22:17:18.976974 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:17:29 crc kubenswrapper[4751]: E0130 22:17:29.248243 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:48740->38.102.83.39:41127: write tcp 38.102.83.39:48740->38.102.83.39:41127: write: connection reset by peer Jan 30 22:17:31 crc kubenswrapper[4751]: I0130 22:17:31.986409 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:17:31 crc kubenswrapper[4751]: E0130 22:17:31.987187 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:17:44 crc kubenswrapper[4751]: I0130 22:17:44.976608 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:17:44 crc kubenswrapper[4751]: E0130 22:17:44.977385 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:17:55 crc kubenswrapper[4751]: I0130 22:17:55.976892 4751 scope.go:117] "RemoveContainer" 
containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:17:55 crc kubenswrapper[4751]: E0130 22:17:55.977837 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:18:07 crc kubenswrapper[4751]: I0130 22:18:07.976418 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:18:07 crc kubenswrapper[4751]: E0130 22:18:07.977459 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:18:22 crc kubenswrapper[4751]: I0130 22:18:22.975468 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:18:22 crc kubenswrapper[4751]: E0130 22:18:22.976201 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:18:34 crc kubenswrapper[4751]: I0130 22:18:34.976705 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:18:34 crc kubenswrapper[4751]: E0130 22:18:34.977591 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:18:47 crc kubenswrapper[4751]: I0130 22:18:47.976246 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:18:47 crc kubenswrapper[4751]: E0130 22:18:47.977116 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:18:58 crc kubenswrapper[4751]: I0130 22:18:58.977320 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:18:59 crc kubenswrapper[4751]: I0130 22:18:59.248985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} Jan 30 22:20:42 crc kubenswrapper[4751]: I0130 22:20:42.782146 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out" Jan 30 22:20:42 crc kubenswrapper[4751]: I0130 22:20:42.784222 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out" Jan 30 22:21:24 crc kubenswrapper[4751]: I0130 22:21:24.126986 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:21:24 crc kubenswrapper[4751]: I0130 22:21:24.127621 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:21:54 crc kubenswrapper[4751]: I0130 22:21:54.126994 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:21:54 crc kubenswrapper[4751]: I0130 22:21:54.127728 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.126750 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.127430 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.127513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.128841 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:22:24 crc kubenswrapper[4751]: I0130 22:22:24.128950 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170" gracePeriod=600 Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.093421 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170" exitCode=0 Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.093535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170"} Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.094059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"} Jan 30 22:22:25 crc kubenswrapper[4751]: I0130 22:22:25.094089 4751 scope.go:117] "RemoveContainer" containerID="1593c7462c2ba7d908b40c4c5757da112fe4f68d666dab70fa589f581456402a" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.034948 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:51 crc kubenswrapper[4751]: E0130 22:22:51.036139 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.036175 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.036512 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" containerName="collect-profiles" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.038431 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.046926 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.152356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.152439 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.153165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255638 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.255691 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.280360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"redhat-operators-9gvrx\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.364594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:22:51 crc kubenswrapper[4751]: I0130 22:22:51.944286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.409576 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" exitCode=0 Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.409732 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074"} Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.410142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"14f3d4c75f54bfa75e802384c2da329bc21b119019ff68ea38121fe6f87885fa"} Jan 30 22:22:52 crc kubenswrapper[4751]: I0130 22:22:52.412246 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:22:54 crc kubenswrapper[4751]: I0130 22:22:54.429770 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.427163 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.430517 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.457275 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.581997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.582266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.582555 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685445 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.685851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.686067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.686246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.709850 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"certified-operators-9wpq7\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:56 crc kubenswrapper[4751]: I0130 22:22:56.758771 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:22:57 crc kubenswrapper[4751]: I0130 22:22:57.446846 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:22:57 crc kubenswrapper[4751]: I0130 22:22:57.472535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"65e1f65905987f1def6751e971ba08713ad56c133e5653d3a02921acceba9c27"} Jan 30 22:22:58 crc kubenswrapper[4751]: I0130 22:22:58.484510 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" exitCode=0 Jan 30 22:22:58 crc kubenswrapper[4751]: I0130 22:22:58.484624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236"} Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.496234 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" exitCode=0 Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.496320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} Jan 30 22:22:59 crc kubenswrapper[4751]: I0130 22:22:59.498601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} Jan 30 22:23:01 crc kubenswrapper[4751]: I0130 22:23:01.519759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerStarted","Data":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.531590 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" exitCode=0 Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.531655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} Jan 30 22:23:02 crc kubenswrapper[4751]: I0130 22:23:02.561620 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9gvrx" 
podStartSLOduration=3.6196913840000002 podStartE2EDuration="11.561599389s" podCreationTimestamp="2026-01-30 22:22:51 +0000 UTC" firstStartedPulling="2026-01-30 22:22:52.411745354 +0000 UTC m=+4111.157568003" lastFinishedPulling="2026-01-30 22:23:00.353653359 +0000 UTC m=+4119.099476008" observedRunningTime="2026-01-30 22:23:01.537041462 +0000 UTC m=+4120.282864101" watchObservedRunningTime="2026-01-30 22:23:02.561599389 +0000 UTC m=+4121.307422038" Jan 30 22:23:03 crc kubenswrapper[4751]: I0130 22:23:03.545829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerStarted","Data":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} Jan 30 22:23:03 crc kubenswrapper[4751]: I0130 22:23:03.574919 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wpq7" podStartSLOduration=2.849440555 podStartE2EDuration="7.57489655s" podCreationTimestamp="2026-01-30 22:22:56 +0000 UTC" firstStartedPulling="2026-01-30 22:22:58.487043686 +0000 UTC m=+4117.232866335" lastFinishedPulling="2026-01-30 22:23:03.212499681 +0000 UTC m=+4121.958322330" observedRunningTime="2026-01-30 22:23:03.567229671 +0000 UTC m=+4122.313052320" watchObservedRunningTime="2026-01-30 22:23:03.57489655 +0000 UTC m=+4122.320719199" Jan 30 22:23:06 crc kubenswrapper[4751]: I0130 22:23:06.759220 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:06 crc kubenswrapper[4751]: I0130 22:23:06.759817 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:07 crc kubenswrapper[4751]: I0130 22:23:07.812576 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9wpq7" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" probeResult="failure" output=< Jan 30 22:23:07 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:23:07 crc kubenswrapper[4751]: > Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.365270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.365844 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.424588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.687433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:11 crc kubenswrapper[4751]: I0130 22:23:11.750191 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:13 crc kubenswrapper[4751]: I0130 22:23:13.657769 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gvrx" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" containerID="cri-o://eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" gracePeriod=2 Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 
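The startup-probe output above ("timeout: failed to connect service \":50051\" within 1s") looks like a gRPC health check against the registry-server port, failing simply because the catalog has not started listening yet. A hedged sketch of an equivalent client-side check using the standard gRPC health API; the address and the 1s budget come from the log, everything else is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // 1s budget, matching the "within 1s" in the probe output above.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "127.0.0.1:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            fmt.Println("timeout: failed to connect service :50051 within 1s")
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING once the server is ready
    }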
22:23:14.206168 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342720 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.342968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") pod \"76270476-be06-47bd-88e3-18ef902b6aba\" (UID: \"76270476-be06-47bd-88e3-18ef902b6aba\") " Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.343760 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities" (OuterVolumeSpecName: "utilities") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.350084 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p" (OuterVolumeSpecName: "kube-api-access-67z9p") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "kube-api-access-67z9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.445451 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67z9p\" (UniqueName: \"kubernetes.io/projected/76270476-be06-47bd-88e3-18ef902b6aba-kube-api-access-67z9p\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.445494 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.467435 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76270476-be06-47bd-88e3-18ef902b6aba" (UID: "76270476-be06-47bd-88e3-18ef902b6aba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.547214 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76270476-be06-47bd-88e3-18ef902b6aba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677874 4751 generic.go:334] "Generic (PLEG): container finished" podID="76270476-be06-47bd-88e3-18ef902b6aba" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" exitCode=0 Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677918 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gvrx" event={"ID":"76270476-be06-47bd-88e3-18ef902b6aba","Type":"ContainerDied","Data":"14f3d4c75f54bfa75e802384c2da329bc21b119019ff68ea38121fe6f87885fa"} Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677962 4751 scope.go:117] "RemoveContainer" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.677975 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gvrx" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.716266 4751 scope.go:117] "RemoveContainer" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.721612 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.734955 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gvrx"] Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.744581 4751 scope.go:117] "RemoveContainer" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.801993 4751 scope.go:117] "RemoveContainer" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.803468 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": container with ID starting with eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58 not found: ID does not exist" containerID="eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.803511 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58"} err="failed to get container status \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": rpc error: code = NotFound desc = could not find container \"eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58\": container with ID starting with eb612396128661ad878d40feb3d1a11fe1a6c655411da65bc7aca745b49f7c58 not found: ID does not exist" Jan 30 22:23:14 crc 
kubenswrapper[4751]: I0130 22:23:14.803537 4751 scope.go:117] "RemoveContainer" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.804871 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": container with ID starting with a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20 not found: ID does not exist" containerID="a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.804914 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20"} err="failed to get container status \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": rpc error: code = NotFound desc = could not find container \"a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20\": container with ID starting with a1fd403f46c5dd4ee5eabfee9e4a057221c22e47797fc69ab1c46dbb2103fd20 not found: ID does not exist" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.804946 4751 scope.go:117] "RemoveContainer" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: E0130 22:23:14.805840 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": container with ID starting with 3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074 not found: ID does not exist" containerID="3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074" Jan 30 22:23:14 crc kubenswrapper[4751]: I0130 22:23:14.806649 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074"} err="failed to get container status \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": rpc error: code = NotFound desc = could not find container \"3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074\": container with ID starting with 3cf9ff5dc7fec647a2a6c33a06905bfd39d55161adcd06daf1b94edbaf2b6074 not found: ID does not exist" Jan 30 22:23:15 crc kubenswrapper[4751]: I0130 22:23:15.992878 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76270476-be06-47bd-88e3-18ef902b6aba" path="/var/lib/kubelet/pods/76270476-be06-47bd-88e3-18ef902b6aba/volumes" Jan 30 22:23:17 crc kubenswrapper[4751]: I0130 22:23:17.028028 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:17 crc kubenswrapper[4751]: I0130 22:23:17.090907 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:18 crc kubenswrapper[4751]: I0130 22:23:18.065318 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:18 crc kubenswrapper[4751]: I0130 22:23:18.717008 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wpq7" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" 
containerID="cri-o://dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" gracePeriod=2 Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.411993 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.465795 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.465934 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.466096 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") pod \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\" (UID: \"4a46f48d-5ce2-43ef-adb9-56105e6b01d3\") " Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.466856 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities" (OuterVolumeSpecName: "utilities") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.472959 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk" (OuterVolumeSpecName: "kube-api-access-dxkmk") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "kube-api-access-dxkmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.515290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a46f48d-5ce2-43ef-adb9-56105e6b01d3" (UID: "4a46f48d-5ce2-43ef-adb9-56105e6b01d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569227 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569266 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.569280 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkmk\" (UniqueName: \"kubernetes.io/projected/4a46f48d-5ce2-43ef-adb9-56105e6b01d3-kube-api-access-dxkmk\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.729920 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" exitCode=0 Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.729991 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730020 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wpq7" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpq7" event={"ID":"4a46f48d-5ce2-43ef-adb9-56105e6b01d3","Type":"ContainerDied","Data":"65e1f65905987f1def6751e971ba08713ad56c133e5653d3a02921acceba9c27"} Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.730093 4751 scope.go:117] "RemoveContainer" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.763887 4751 scope.go:117] "RemoveContainer" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.792486 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.797873 4751 scope.go:117] "RemoveContainer" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.807403 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wpq7"] Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.858623 4751 scope.go:117] "RemoveContainer" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859137 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": container with ID starting with dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553 not found: ID does not exist" containerID="dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859170 
4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553"} err="failed to get container status \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": rpc error: code = NotFound desc = could not find container \"dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553\": container with ID starting with dd27304293a6c3f230008f3348d34ecb3b9536b94ce4210c835c057dbdd64553 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859194 4751 scope.go:117] "RemoveContainer" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859496 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": container with ID starting with bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616 not found: ID does not exist" containerID="bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859541 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616"} err="failed to get container status \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": rpc error: code = NotFound desc = could not find container \"bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616\": container with ID starting with bf200d9f09320a79cc6fdbfd992fb9992198393dafe0d64983903e56f54a3616 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859569 4751 scope.go:117] "RemoveContainer" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: E0130 22:23:19.859891 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": container with ID starting with 3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236 not found: ID does not exist" containerID="3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.859925 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236"} err="failed to get container status \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": rpc error: code = NotFound desc = could not find container \"3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236\": container with ID starting with 3ae0ae262d04e0f7822e2dafa41890cb070543d03d786466d08b7675efcbf236 not found: ID does not exist" Jan 30 22:23:19 crc kubenswrapper[4751]: I0130 22:23:19.992429 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" path="/var/lib/kubelet/pods/4a46f48d-5ce2-43ef-adb9-56105e6b01d3/volumes" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.208249 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209348 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209362 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209382 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209388 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209405 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209411 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209470 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209478 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="extract-content" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209492 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209499 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: E0130 22:23:42.209512 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209519 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="extract-utilities" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209795 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="76270476-be06-47bd-88e3-18ef902b6aba" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.209826 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a46f48d-5ce2-43ef-adb9-56105e6b01d3" containerName="registry-server" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.211797 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.240916 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.372812 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.373162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.373236 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475753 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.475958 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.476178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.476246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.508285 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"community-operators-l6xlc\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:42 crc kubenswrapper[4751]: I0130 22:23:42.542907 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.104823 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.992176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} Jan 30 22:23:43 crc kubenswrapper[4751]: I0130 22:23:43.992786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"aa511afe192cabc0b934a576e66e50e45b1455e28592efbbc96e86b29fa41338"} Jan 30 22:23:45 crc kubenswrapper[4751]: I0130 22:23:45.017051 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" exitCode=0 Jan 30 22:23:45 crc kubenswrapper[4751]: I0130 22:23:45.017650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} Jan 30 22:23:47 crc kubenswrapper[4751]: I0130 22:23:47.045891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} Jan 30 22:23:48 crc kubenswrapper[4751]: I0130 22:23:48.057914 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" exitCode=0 Jan 30 22:23:48 crc kubenswrapper[4751]: I0130 22:23:48.057981 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} Jan 30 22:23:49 crc kubenswrapper[4751]: I0130 22:23:49.072572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerStarted","Data":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} Jan 30 22:23:49 crc kubenswrapper[4751]: I0130 22:23:49.090723 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l6xlc" podStartSLOduration=3.60181165 podStartE2EDuration="7.090707309s" podCreationTimestamp="2026-01-30 22:23:42 +0000 UTC" firstStartedPulling="2026-01-30 22:23:45.022486558 +0000 UTC m=+4163.768309197" lastFinishedPulling="2026-01-30 
22:23:48.511382217 +0000 UTC m=+4167.257204856" observedRunningTime="2026-01-30 22:23:49.088477128 +0000 UTC m=+4167.834299777" watchObservedRunningTime="2026-01-30 22:23:49.090707309 +0000 UTC m=+4167.836529958" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.543998 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.544651 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:52 crc kubenswrapper[4751]: I0130 22:23:52.592033 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:53 crc kubenswrapper[4751]: I0130 22:23:53.473750 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:53 crc kubenswrapper[4751]: I0130 22:23:53.558677 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.138966 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l6xlc" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" containerID="cri-o://7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" gracePeriod=2 Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.685031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.842889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.843095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") " Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.843158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") pod \"4703e2e6-6343-4584-825f-4c35818f3cbd\" (UID: \"4703e2e6-6343-4584-825f-4c35818f3cbd\") "
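
Annotation: every Mount/Unmount cycle in this log walks the same three volumes per catalog pod: two emptyDirs ("utilities" and "catalog-content") plus one projected service-account token ("kube-api-access-*"). A minimal sketch of that volume set using the k8s.io/api/core/v1 types (names are taken from the log; the projected token's sources are omitted because the log does not show them):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// catalogVolumes builds the three volumes the reconciler entries above cycle
// through: node-local scratch space for the extract-utilities and
// extract-content init steps, plus the projected API token mount.
func catalogVolumes(tokenVolume string) []corev1.Volume {
	return []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: tokenVolume, VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{}}},
	}
}

func main() {
	for _, v := range catalogVolumes("kube-api-access-vh94h") {
		fmt.Println(v.Name)
	}
}

Because both data volumes are emptyDirs, they die with the pod, which is why each teardown above ends in "Cleaned up orphaned pod volumes dir" rather than any detach from storage.

Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.844151 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities" (OuterVolumeSpecName: "utilities") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "utilities".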
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.856923 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h" (OuterVolumeSpecName: "kube-api-access-vh94h") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "kube-api-access-vh94h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.905398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4703e2e6-6343-4584-825f-4c35818f3cbd" (UID: "4703e2e6-6343-4584-825f-4c35818f3cbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946416 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946475 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh94h\" (UniqueName: \"kubernetes.io/projected/4703e2e6-6343-4584-825f-4c35818f3cbd-kube-api-access-vh94h\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:55 crc kubenswrapper[4751]: I0130 22:23:55.946526 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4703e2e6-6343-4584-825f-4c35818f3cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154387 4751 generic.go:334] "Generic (PLEG): container finished" podID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" exitCode=0 Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154472 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6xlc" event={"ID":"4703e2e6-6343-4584-825f-4c35818f3cbd","Type":"ContainerDied","Data":"aa511afe192cabc0b934a576e66e50e45b1455e28592efbbc96e86b29fa41338"} Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154494 4751 scope.go:117] "RemoveContainer" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.154653 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6xlc" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.188340 4751 scope.go:117] "RemoveContainer" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.191369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.203071 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l6xlc"] Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.211860 4751 scope.go:117] "RemoveContainer" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.259858 4751 scope.go:117] "RemoveContainer" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.260370 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": container with ID starting with 7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440 not found: ID does not exist" containerID="7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260407 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440"} err="failed to get container status \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": rpc error: code = NotFound desc = could not find container \"7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440\": container with ID starting with 7d7eb2b7990fd2e8a5df9c479d070e5720f5579b2efd5f5c15ba843eb3fcf440 not found: ID does not exist" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260434 4751 scope.go:117] "RemoveContainer" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.260758 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": container with ID starting with 0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25 not found: ID does not exist" containerID="0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260835 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25"} err="failed to get container status \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": rpc error: code = NotFound desc = could not find container \"0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25\": container with ID starting with 0050840646a31670c2fa94c5002fe700937f65a9e9671334db09df514055aa25 not found: ID does not exist" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.260885 4751 scope.go:117] "RemoveContainer" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: E0130 22:23:56.261689 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": container with ID starting with 1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944 not found: ID does not exist" containerID="1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944" Jan 30 22:23:56 crc kubenswrapper[4751]: I0130 22:23:56.261713 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944"} err="failed to get container status \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": rpc error: code = NotFound desc = could not find container \"1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944\": container with ID starting with 1fc9ade8a4dcb419db24ca408762269ef02b2c451cd8ed1921dd4a1b1bba5944 not found: ID does not exist" Jan 30 22:23:57 crc kubenswrapper[4751]: I0130 22:23:57.987630 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" path="/var/lib/kubelet/pods/4703e2e6-6343-4584-825f-4c35818f3cbd/volumes" Jan 30 22:24:24 crc kubenswrapper[4751]: I0130 22:24:24.126989 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:24:24 crc kubenswrapper[4751]: I0130 22:24:24.127623 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.802824 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805039 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805064 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805093 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-content" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805103 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-content" Jan 30 22:24:35 crc kubenswrapper[4751]: E0130 22:24:35.805154 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-utilities" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805165 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="extract-utilities" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.805471 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4703e2e6-6343-4584-825f-4c35818f3cbd" containerName="registry-server" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 
22:24:35.807552 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.824209 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.955469 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.956085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:35 crc kubenswrapper[4751]: I0130 22:24:35.956243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.059457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.060202 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.060361 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 
22:24:36.088475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"redhat-marketplace-n927t\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.132168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:36 crc kubenswrapper[4751]: I0130 22:24:36.694177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.602987 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573" exitCode=0 Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.603106 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"} Jan 30 22:24:37 crc kubenswrapper[4751]: I0130 22:24:37.603352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"9d33d52f32a6940b5421d9ec5d883dcef165816d855dde7fe6395ca3cff7a153"} Jan 30 22:24:39 crc kubenswrapper[4751]: I0130 22:24:39.630097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} Jan 30 22:24:40 crc kubenswrapper[4751]: I0130 22:24:40.640217 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1" exitCode=0 Jan 30 22:24:40 crc kubenswrapper[4751]: I0130 22:24:40.640348 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} Jan 30 22:24:41 crc kubenswrapper[4751]: I0130 22:24:41.652747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerStarted","Data":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"} Jan 30 22:24:41 crc kubenswrapper[4751]: I0130 22:24:41.682374 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n927t" podStartSLOduration=3.194051694 podStartE2EDuration="6.682353627s" podCreationTimestamp="2026-01-30 22:24:35 +0000 UTC" firstStartedPulling="2026-01-30 22:24:37.604999808 +0000 UTC m=+4216.350822457" lastFinishedPulling="2026-01-30 22:24:41.093301741 +0000 UTC m=+4219.839124390" observedRunningTime="2026-01-30 22:24:41.671162123 +0000 UTC m=+4220.416984772" watchObservedRunningTime="2026-01-30 22:24:41.682353627 +0000 UTC m=+4220.428176296"
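
Annotation: the pod_startup_latency_tracker entry above is self-consistent arithmetic, not two independent measurements. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) discounted. A short Go check with the timestamps copied verbatim from the entry:

package main

import (
	"fmt"
	"time"
)

// Reproduces the redhat-marketplace-n927t numbers: 6.682353627s end to end,
// minus a 3.488301933s pull window, leaves the reported 3.194051694s SLO
// duration. All four instants are taken from the log entry above.
func main() {
	created := time.Date(2026, time.January, 30, 22, 24, 35, 0, time.UTC)
	firstPull := time.Date(2026, time.January, 30, 22, 24, 37, 604999808, time.UTC)
	lastPull := time.Date(2026, time.January, 30, 22, 24, 41, 93301741, time.UTC)
	watchObservedRunning := time.Date(2026, time.January, 30, 22, 24, 41, 682353627, time.UTC)

	e2e := watchObservedRunning.Sub(created)        // 6.682353627s
	pull := lastPull.Sub(firstPull)                 // 3.488301933s
	fmt.Println("podStartE2EDuration:", e2e)        // matches the log
	fmt.Println("podStartSLOduration:", e2e-pull)   // 3.194051694s
}

Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.132529 4751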
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.133234 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:46 crc kubenswrapper[4751]: I0130 22:24:46.202716 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:47 crc kubenswrapper[4751]: I0130 22:24:47.624996 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:47 crc kubenswrapper[4751]: I0130 22:24:47.677548 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:48 crc kubenswrapper[4751]: I0130 22:24:48.720615 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n927t" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server" containerID="cri-o://b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" gracePeriod=2 Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.606828 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696567 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.696987 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") pod \"56dff38b-859f-48c6-8b01-42dfaf948555\" (UID: \"56dff38b-859f-48c6-8b01-42dfaf948555\") " Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.697879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities" (OuterVolumeSpecName: "utilities") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.703398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8" (OuterVolumeSpecName: "kube-api-access-v4xp8") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "kube-api-access-v4xp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736159 4751 generic.go:334] "Generic (PLEG): container finished" podID="56dff38b-859f-48c6-8b01-42dfaf948555" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" exitCode=0 Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736217 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"} Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n927t" event={"ID":"56dff38b-859f-48c6-8b01-42dfaf948555","Type":"ContainerDied","Data":"9d33d52f32a6940b5421d9ec5d883dcef165816d855dde7fe6395ca3cff7a153"} Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736272 4751 scope.go:117] "RemoveContainer" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.736291 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n927t" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.787674 4751 scope.go:117] "RemoveContainer" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.800344 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.800629 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4xp8\" (UniqueName: \"kubernetes.io/projected/56dff38b-859f-48c6-8b01-42dfaf948555-kube-api-access-v4xp8\") on node \"crc\" DevicePath \"\"" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.827497 4751 scope.go:117] "RemoveContainer" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890362 4751 scope.go:117] "RemoveContainer" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.890883 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": container with ID starting with b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233 not found: ID does not exist" containerID="b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890932 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233"} err="failed to get container status \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": rpc error: code = NotFound desc = could not find container \"b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233\": container with ID starting with b55ff9fc70706f931e0e7fa101f2360fa8e1fdf7858db58014c9d3df8fac5233 not found: ID does not exist" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.890960 4751 scope.go:117] 
"RemoveContainer" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1" Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.891289 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": container with ID starting with 57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1 not found: ID does not exist" containerID="57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891348 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1"} err="failed to get container status \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": rpc error: code = NotFound desc = could not find container \"57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1\": container with ID starting with 57ffa68628c029a3bdcb64f02c8e9c038f9002e18cb2cdf74eab05226d0a95e1 not found: ID does not exist" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891379 4751 scope.go:117] "RemoveContainer" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573" Jan 30 22:24:49 crc kubenswrapper[4751]: E0130 22:24:49.891714 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": container with ID starting with 4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573 not found: ID does not exist" containerID="4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573" Jan 30 22:24:49 crc kubenswrapper[4751]: I0130 22:24:49.891744 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573"} err="failed to get container status \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": rpc error: code = NotFound desc = could not find container \"4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573\": container with ID starting with 4567d845f69832e0d45827c27b631b5fd4466f063519f839ed57e88a1d72e573 not found: ID does not exist" Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.075614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56dff38b-859f-48c6-8b01-42dfaf948555" (UID: "56dff38b-859f-48c6-8b01-42dfaf948555"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.107690 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56dff38b-859f-48c6-8b01-42dfaf948555-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.373703 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:50 crc kubenswrapper[4751]: I0130 22:24:50.385773 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n927t"] Jan 30 22:24:52 crc kubenswrapper[4751]: I0130 22:24:52.008986 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" path="/var/lib/kubelet/pods/56dff38b-859f-48c6-8b01-42dfaf948555/volumes" Jan 30 22:24:54 crc kubenswrapper[4751]: I0130 22:24:54.127213 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:24:54 crc kubenswrapper[4751]: I0130 22:24:54.127812 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.126883 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.127465 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.127530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.128397 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:25:24 crc kubenswrapper[4751]: I0130 22:25:24.128452 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" gracePeriod=600 Jan 30 22:25:24 crc kubenswrapper[4751]: E0130 22:25:24.255407 4751 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.146741 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" exitCode=0 Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.146812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"} Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.147119 4751 scope.go:117] "RemoveContainer" containerID="efd99c7f1a974f0acdc1ce10091a0b2ee7636478bf31291cff8918dfb9474170" Jan 30 22:25:25 crc kubenswrapper[4751]: I0130 22:25:25.147962 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:25:25 crc kubenswrapper[4751]: E0130 22:25:25.148318 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:25:38 crc kubenswrapper[4751]: I0130 22:25:38.976092 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:25:38 crc kubenswrapper[4751]: E0130 22:25:38.977071 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:25:51 crc kubenswrapper[4751]: I0130 22:25:51.983934 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:25:51 crc kubenswrapper[4751]: E0130 22:25:51.984740 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:26:02 crc kubenswrapper[4751]: I0130 22:26:02.977532 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:26:02 crc kubenswrapper[4751]: E0130 22:26:02.978491 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:26:15 crc kubenswrapper[4751]: I0130 22:26:15.976419 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:26:15 crc kubenswrapper[4751]: E0130 22:26:15.977160 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:26:28 crc kubenswrapper[4751]: I0130 22:26:28.976468 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:26:28 crc kubenswrapper[4751]: E0130 22:26:28.977342 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:26:40 crc kubenswrapper[4751]: I0130 22:26:40.976631 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:26:40 crc kubenswrapper[4751]: E0130 22:26:40.977630 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:26:51 crc kubenswrapper[4751]: I0130 22:26:51.976523 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:26:51 crc kubenswrapper[4751]: E0130 22:26:51.977326 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:27:03 crc kubenswrapper[4751]: I0130 22:27:03.976747 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:27:03 crc kubenswrapper[4751]: E0130 22:27:03.977628 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:27:17 crc kubenswrapper[4751]: I0130 22:27:17.977809 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:27:17 crc kubenswrapper[4751]: E0130 22:27:17.978614 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:27:22 crc kubenswrapper[4751]: E0130 22:27:22.291056 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51282->38.102.83.39:41127: write tcp 38.102.83.39:51282->38.102.83.39:41127: write: broken pipe Jan 30 22:27:32 crc kubenswrapper[4751]: I0130 22:27:32.976191 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:27:32 crc kubenswrapper[4751]: E0130 22:27:32.977126 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:27:45 crc kubenswrapper[4751]: I0130 22:27:45.977136 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:27:45 crc kubenswrapper[4751]: E0130 22:27:45.977843 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:27:56 crc kubenswrapper[4751]: I0130 22:27:56.976376 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:27:56 crc kubenswrapper[4751]: E0130 22:27:56.977306 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:28:07 crc kubenswrapper[4751]: I0130 22:28:07.977363 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:28:07 crc kubenswrapper[4751]: E0130 22:28:07.978311 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:28:18 crc kubenswrapper[4751]: I0130 22:28:18.975962 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:28:18 crc kubenswrapper[4751]: E0130 22:28:18.976665 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:28:29 crc kubenswrapper[4751]: I0130 22:28:29.978317 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:28:29 crc kubenswrapper[4751]: E0130 22:28:29.979644 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:28:40 crc kubenswrapper[4751]: I0130 22:28:40.977208 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:28:40 crc kubenswrapper[4751]: E0130 22:28:40.978363 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:28:52 crc kubenswrapper[4751]: I0130 22:28:52.975943 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:28:52 crc kubenswrapper[4751]: E0130 22:28:52.976995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:29:04 crc kubenswrapper[4751]: I0130 22:29:04.975761 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:29:04 crc kubenswrapper[4751]: E0130 22:29:04.976686 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:29:15 crc kubenswrapper[4751]: I0130 22:29:15.976240 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:29:15 crc kubenswrapper[4751]: E0130 22:29:15.977013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:29:26 crc kubenswrapper[4751]: I0130 22:29:26.976425 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:29:26 crc kubenswrapper[4751]: E0130 22:29:26.977268 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:29:38 crc kubenswrapper[4751]: I0130 22:29:38.976417 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:29:38 crc kubenswrapper[4751]: E0130 22:29:38.977353 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:29:52 crc kubenswrapper[4751]: I0130 22:29:52.977139 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:29:52 crc kubenswrapper[4751]: E0130 22:29:52.977998 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.159025 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"] Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161165 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-content" Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161205 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server" Jan 30 22:30:00 crc 
Jan 30 22:30:00 crc kubenswrapper[4751]: E0130 22:30:00.161243 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-utilities"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161249 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="extract-utilities"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.161520 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dff38b-859f-48c6-8b01-42dfaf948555" containerName="registry-server"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.162403 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.164920 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.178259 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.178616 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"]
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205533 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.205795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.307829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.308280 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.308341 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.309274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.327016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.329357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"collect-profiles-29496870-c72gl\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.488394 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:00 crc kubenswrapper[4751]: I0130 22:30:00.994528 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"]
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.683265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerStarted","Data":"24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25"}
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.683319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerStarted","Data":"db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"}
Jan 30 22:30:01 crc kubenswrapper[4751]: I0130 22:30:01.706822 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" podStartSLOduration=1.706802524 podStartE2EDuration="1.706802524s" podCreationTimestamp="2026-01-30 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 22:30:01.706653209 +0000 UTC m=+4540.452475858" watchObservedRunningTime="2026-01-30 22:30:01.706802524 +0000 UTC m=+4540.452625173"
Jan 30 22:30:02 crc kubenswrapper[4751]: I0130 22:30:02.694904 4751 generic.go:334] "Generic (PLEG): container finished" podID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerID="24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25" exitCode=0
Jan 30 22:30:02 crc kubenswrapper[4751]: I0130 22:30:02.695299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerDied","Data":"24be1af0168ea685c3b9ac7cebe33603bdc5a928d7d9a415eddd1b85a3a97b25"}
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.106142 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.200702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.200791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.201011 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") pod \"a015a029-77ef-48b8-870d-c6e5381cbbbf\" (UID: \"a015a029-77ef-48b8-870d-c6e5381cbbbf\") "
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.201706 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.202087 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a015a029-77ef-48b8-870d-c6e5381cbbbf-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.207018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l" (OuterVolumeSpecName: "kube-api-access-8js4l") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "kube-api-access-8js4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.214551 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a015a029-77ef-48b8-870d-c6e5381cbbbf" (UID: "a015a029-77ef-48b8-870d-c6e5381cbbbf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.303911 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a015a029-77ef-48b8-870d-c6e5381cbbbf-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.304154 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8js4l\" (UniqueName: \"kubernetes.io/projected/a015a029-77ef-48b8-870d-c6e5381cbbbf-kube-api-access-8js4l\") on node \"crc\" DevicePath \"\""
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl" event={"ID":"a015a029-77ef-48b8-870d-c6e5381cbbbf","Type":"ContainerDied","Data":"db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"}
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716689 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db70346d1789d2c184fbb79f4672d3561cd1e382d2c767cd3d66034d0575acb3"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.716165 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496870-c72gl"
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.790388 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 22:30:04 crc kubenswrapper[4751]: I0130 22:30:04.803236 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496825-qqpqc"]
Jan 30 22:30:05 crc kubenswrapper[4751]: I0130 22:30:05.976306 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:05 crc kubenswrapper[4751]: E0130 22:30:05.976862 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:30:05 crc kubenswrapper[4751]: I0130 22:30:05.994551 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a5fa77-b23e-417a-9854-929675be1c58" path="/var/lib/kubelet/pods/60a5fa77-b23e-417a-9854-929675be1c58/volumes"
Jan 30 22:30:17 crc kubenswrapper[4751]: I0130 22:30:17.975846 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:17 crc kubenswrapper[4751]: E0130 22:30:17.976730 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:30:28 crc kubenswrapper[4751]: I0130 22:30:28.852979 4751 scope.go:117] "RemoveContainer" containerID="a925b908937d8dd9436a4992fc297b882d7c680a8bb02a09739b64f2a561f95a"
Jan 30 22:30:29 crc kubenswrapper[4751]: I0130 22:30:29.976272 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3"
Jan 30 22:30:30 crc kubenswrapper[4751]: I0130 22:30:30.999054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"}
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.954915 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:45 crc kubenswrapper[4751]: E0130 22:31:45.955862 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.955877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.956152 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a015a029-77ef-48b8-870d-c6e5381cbbbf" containerName="collect-profiles"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.956919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.962076 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.963246 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.963604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvc9j"
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.968191 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:45 crc kubenswrapper[4751]: I0130 22:31:45.970782 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.040930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.142986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143165 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.143419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.144525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.144992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.149478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245828 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.245907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.246318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.246712 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.247284 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.249422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.249526 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.266409 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.279979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") " pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.292427 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.801548 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.801875 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 22:31:46 crc kubenswrapper[4751]: I0130 22:31:46.837494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerStarted","Data":"01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896"}
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.902697 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.905147 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnm9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(053bddc4-b1a1-4951-af33-6230acd3ee0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 22:32:24 crc kubenswrapper[4751]: E0130 22:32:24.906375 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b"
Jan 30 22:32:25 crc kubenswrapper[4751]: E0130 22:32:25.361234 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b"
Jan 30 22:32:40 crc kubenswrapper[4751]: I0130 22:32:40.430090 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 22:32:42 crc kubenswrapper[4751]: I0130 22:32:42.592427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerStarted","Data":"4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454"}
Jan 30 22:32:42 crc kubenswrapper[4751]: I0130 22:32:42.613027 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.987366335 podStartE2EDuration="58.613003293s" podCreationTimestamp="2026-01-30 22:31:44 +0000 UTC" firstStartedPulling="2026-01-30 22:31:46.801487424 +0000 UTC m=+4645.547310103" lastFinishedPulling="2026-01-30 22:32:40.427124412 +0000 UTC m=+4699.172947061" observedRunningTime="2026-01-30 22:32:42.608920992 +0000 UTC m=+4701.354743641" watchObservedRunningTime="2026-01-30 22:32:42.613003293 +0000 UTC m=+4701.358825982"
Jan 30 22:32:54 crc kubenswrapper[4751]: I0130 22:32:54.126726 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:32:54 crc kubenswrapper[4751]: I0130 22:32:54.127174 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:33:24 crc kubenswrapper[4751]: I0130 22:33:24.126674 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:33:24 crc kubenswrapper[4751]: I0130 22:33:24.127281 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.204779 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"]
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.228340 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.326479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"]
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.338850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.352041 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.352392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.455408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh"
Jan 30 22:33:40 crc kubenswrapper[4751]:
I0130 22:33:40.460011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.461495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.491888 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"redhat-operators-mmdjh\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:40 crc kubenswrapper[4751]: I0130 22:33:40.567857 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:33:41 crc kubenswrapper[4751]: I0130 22:33:41.875705 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.262233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be"} Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.262700 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" exitCode=0 Jan 30 22:33:42 crc kubenswrapper[4751]: I0130 22:33:42.263042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"f56f6afdb1c34b3fd6832f87715462fd3fb2f665ac0ee6d4689e6af74b5ac7ce"} Jan 30 22:33:44 crc kubenswrapper[4751]: I0130 22:33:44.287532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} Jan 30 22:33:50 crc kubenswrapper[4751]: I0130 22:33:50.353927 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" exitCode=0 Jan 30 22:33:50 crc kubenswrapper[4751]: I0130 22:33:50.354308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} Jan 30 22:33:51 crc kubenswrapper[4751]: I0130 22:33:51.368474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" 
event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerStarted","Data":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.127275 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.127973 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.128030 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.129046 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.130975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" gracePeriod=600 Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399391 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" exitCode=0 Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395"} Jan 30 22:33:54 crc kubenswrapper[4751]: I0130 22:33:54.399786 4751 scope.go:117] "RemoveContainer" containerID="f08d82374d50d2858648676bc3c3c1b7e1b15c6d5ea8534a22318db20fcfaab3" Jan 30 22:33:55 crc kubenswrapper[4751]: I0130 22:33:55.417091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} Jan 30 22:33:55 crc kubenswrapper[4751]: I0130 22:33:55.460901 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mmdjh" podStartSLOduration=6.888034032 podStartE2EDuration="15.460876277s" podCreationTimestamp="2026-01-30 22:33:40 +0000 UTC" firstStartedPulling="2026-01-30 22:33:42.265154151 +0000 UTC m=+4761.010976800" lastFinishedPulling="2026-01-30 22:33:50.837996396 +0000 UTC m=+4769.583819045" observedRunningTime="2026-01-30 
22:33:51.39308498 +0000 UTC m=+4770.138907629" watchObservedRunningTime="2026-01-30 22:33:55.460876277 +0000 UTC m=+4774.206698926" Jan 30 22:34:00 crc kubenswrapper[4751]: I0130 22:34:00.570221 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:34:00 crc kubenswrapper[4751]: I0130 22:34:00.570832 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:34:01 crc kubenswrapper[4751]: I0130 22:34:01.620021 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:01 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:01 crc kubenswrapper[4751]: > Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.144709 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.147572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.184071 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.205650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.307501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.307561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 
22:34:02.307837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.309237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-utilities\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.309729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9d691f-2785-4248-80d8-903f36ff7f1f-catalog-content\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:02 crc kubenswrapper[4751]: I0130 22:34:02.784760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7hns\" (UniqueName: \"kubernetes.io/projected/fd9d691f-2785-4248-80d8-903f36ff7f1f-kube-api-access-p7hns\") pod \"certified-operators-pscx6\" (UID: \"fd9d691f-2785-4248-80d8-903f36ff7f1f\") " pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:03 crc kubenswrapper[4751]: I0130 22:34:03.077526 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:03 crc kubenswrapper[4751]: I0130 22:34:03.954363 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.523856 4751 generic.go:334] "Generic (PLEG): container finished" podID="fd9d691f-2785-4248-80d8-903f36ff7f1f" containerID="6648c3460f5d5803211a53bb4c08c8569982bf978ff508c811d83df2f6906ec9" exitCode=0 Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.523924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerDied","Data":"6648c3460f5d5803211a53bb4c08c8569982bf978ff508c811d83df2f6906ec9"} Jan 30 22:34:04 crc kubenswrapper[4751]: I0130 22:34:04.524146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerStarted","Data":"6ad69d79162c25ac3263a83274772fdd49a6d71406dc5eefadff25bccf952620"} Jan 30 22:34:10 crc kubenswrapper[4751]: I0130 22:34:10.601758 4751 generic.go:334] "Generic (PLEG): container finished" podID="fd9d691f-2785-4248-80d8-903f36ff7f1f" containerID="2e43ad21a17b63ad8acc030f78f472adb49950c157beb3129e02bf74fba4aeaf" exitCode=0 Jan 30 22:34:10 crc kubenswrapper[4751]: I0130 22:34:10.601882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerDied","Data":"2e43ad21a17b63ad8acc030f78f472adb49950c157beb3129e02bf74fba4aeaf"} Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.615973 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pscx6" 
event={"ID":"fd9d691f-2785-4248-80d8-903f36ff7f1f","Type":"ContainerStarted","Data":"6479c675db0b9a47e314dc95f0bdf6f15cc383d8a737d250a2d08ecf07b4e508"} Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.636226 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:11 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:11 crc kubenswrapper[4751]: > Jan 30 22:34:11 crc kubenswrapper[4751]: I0130 22:34:11.645241 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pscx6" podStartSLOduration=2.851357857 podStartE2EDuration="9.645209796s" podCreationTimestamp="2026-01-30 22:34:02 +0000 UTC" firstStartedPulling="2026-01-30 22:34:04.525674855 +0000 UTC m=+4783.271497504" lastFinishedPulling="2026-01-30 22:34:11.319526794 +0000 UTC m=+4790.065349443" observedRunningTime="2026-01-30 22:34:11.630646622 +0000 UTC m=+4790.376469291" watchObservedRunningTime="2026-01-30 22:34:11.645209796 +0000 UTC m=+4790.391032445" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.078467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.078818 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:13 crc kubenswrapper[4751]: I0130 22:34:13.133226 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:21 crc kubenswrapper[4751]: I0130 22:34:21.622720 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:21 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:21 crc kubenswrapper[4751]: > Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.153889 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pscx6" Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.316076 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pscx6"] Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.370392 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:23 crc kubenswrapper[4751]: I0130 22:34:23.742911 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcjb7" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" containerID="cri-o://866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" gracePeriod=2 Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.753630 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerID="866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" exitCode=0 Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.753718 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" 
event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac"} Jan 30 22:34:24 crc kubenswrapper[4751]: I0130 22:34:24.887834 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.072608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.073114 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.073348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") pod \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\" (UID: \"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776\") " Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.076157 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities" (OuterVolumeSpecName: "utilities") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.095067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7" (OuterVolumeSpecName: "kube-api-access-qbtz7") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "kube-api-access-qbtz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176219 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176262 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbtz7\" (UniqueName: \"kubernetes.io/projected/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-kube-api-access-qbtz7\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.176723 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" (UID: "e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.277770 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.764744 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcjb7" event={"ID":"e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776","Type":"ContainerDied","Data":"95551a4e245bf6ad27a1b26cb62b9724a5be7406b4d6229b016888f12ca7d6d4"} Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.764992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcjb7" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.765763 4751 scope.go:117] "RemoveContainer" containerID="866096c9e0962b4450aeafceff9a6e799efcc8a53a6c4825b431141eedcb2cac" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.805851 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.816347 4751 scope.go:117] "RemoveContainer" containerID="b9102fc49cd164d867074d03c63d8593be70d6d663c1f645db5a7cf70fe3ec65" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.817349 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcjb7"] Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.848822 4751 scope.go:117] "RemoveContainer" containerID="f0ff7f17884024cadb59819e4114f64f13e4c4199dcbe665c88b3d9400eb196b" Jan 30 22:34:25 crc kubenswrapper[4751]: I0130 22:34:25.993737 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" path="/var/lib/kubelet/pods/e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776/volumes" Jan 30 22:34:31 crc kubenswrapper[4751]: I0130 22:34:31.626695 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:31 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:31 crc kubenswrapper[4751]: > Jan 30 22:34:41 crc kubenswrapper[4751]: I0130 22:34:41.623923 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:41 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:41 crc kubenswrapper[4751]: > Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.041399 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.048888 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.048942 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.048976 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" 
containerName="extract-utilities" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.049006 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-utilities" Jan 30 22:34:44 crc kubenswrapper[4751]: E0130 22:34:44.049029 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-content" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.049051 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="extract-content" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.050969 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c60341-0c3e-4be5-a2a2-e5a4ed9b5776" containerName="registry-server" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.063842 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.142846 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.143638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.143714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.245734 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.246818 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.246989 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.247145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: 
\"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.259822 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.260424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.309009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"redhat-marketplace-2jv98\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:44 crc kubenswrapper[4751]: I0130 22:34:44.422073 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:45 crc kubenswrapper[4751]: I0130 22:34:45.844704 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:34:45 crc kubenswrapper[4751]: I0130 22:34:45.998922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"3e4db97d3dab806bbea89c7b116ea6482682a310a5e96ab17c7a34a929e79d26"} Jan 30 22:34:47 crc kubenswrapper[4751]: I0130 22:34:47.012440 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7"} Jan 30 22:34:47 crc kubenswrapper[4751]: I0130 22:34:47.014754 4751 generic.go:334] "Generic (PLEG): container finished" podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7" exitCode=0 Jan 30 22:34:49 crc kubenswrapper[4751]: I0130 22:34:49.037059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d"} Jan 30 22:34:50 crc kubenswrapper[4751]: I0130 22:34:50.049390 4751 generic.go:334] "Generic (PLEG): container finished" podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d" exitCode=0 Jan 30 22:34:50 crc kubenswrapper[4751]: I0130 22:34:50.049443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d"} Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.065699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerStarted","Data":"17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956"} Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.110831 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jv98" podStartSLOduration=4.5105243139999995 podStartE2EDuration="8.106373584s" podCreationTimestamp="2026-01-30 22:34:43 +0000 UTC" firstStartedPulling="2026-01-30 22:34:47.016176878 +0000 UTC m=+4825.761999537" lastFinishedPulling="2026-01-30 22:34:50.612026158 +0000 UTC m=+4829.357848807" observedRunningTime="2026-01-30 22:34:51.100152405 +0000 UTC m=+4829.845975084" watchObservedRunningTime="2026-01-30 22:34:51.106373584 +0000 UTC m=+4829.852196253" Jan 30 22:34:51 crc kubenswrapper[4751]: I0130 22:34:51.688811 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:51 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:51 crc kubenswrapper[4751]: > Jan 30 22:34:54 crc kubenswrapper[4751]: I0130 22:34:54.423525 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:54 crc kubenswrapper[4751]: I0130 22:34:54.424096 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:34:55 crc kubenswrapper[4751]: I0130 22:34:55.481981 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2jv98" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" probeResult="failure" output=< Jan 30 22:34:55 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:34:55 crc kubenswrapper[4751]: > Jan 30 22:35:01 crc kubenswrapper[4751]: I0130 22:35:01.636392 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" probeResult="failure" output=< Jan 30 22:35:01 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:35:01 crc kubenswrapper[4751]: > Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.494524 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.553467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:04 crc kubenswrapper[4751]: I0130 22:35:04.795659 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:06 crc kubenswrapper[4751]: I0130 22:35:06.225846 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jv98" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" containerID="cri-o://17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" gracePeriod=2 Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.276994 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="328960d4-cdf9-4134-b966-af48db38c682" containerID="17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" exitCode=0 Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.277700 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956"} Jan 30 22:35:07 crc kubenswrapper[4751]: I0130 22:35:07.926764 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.000584 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.000992 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.001063 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") pod \"328960d4-cdf9-4134-b966-af48db38c682\" (UID: \"328960d4-cdf9-4134-b966-af48db38c682\") " Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.036785 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities" (OuterVolumeSpecName: "utilities") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.075073 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp" (OuterVolumeSpecName: "kube-api-access-nzzxp") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "kube-api-access-nzzxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.096457 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328960d4-cdf9-4134-b966-af48db38c682" (UID: "328960d4-cdf9-4134-b966-af48db38c682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108409 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzxp\" (UniqueName: \"kubernetes.io/projected/328960d4-cdf9-4134-b966-af48db38c682-kube-api-access-nzzxp\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108444 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.108455 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328960d4-cdf9-4134-b966-af48db38c682-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.293366 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jv98" event={"ID":"328960d4-cdf9-4134-b966-af48db38c682","Type":"ContainerDied","Data":"3e4db97d3dab806bbea89c7b116ea6482682a310a5e96ab17c7a34a929e79d26"} Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.293468 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jv98" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.295601 4751 scope.go:117] "RemoveContainer" containerID="17f2dbf626097fd957f4861f087689c5c713092f2977ce4cc3de8f3499d1d956" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.336572 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.350421 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jv98"] Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.362729 4751 scope.go:117] "RemoveContainer" containerID="84a7032a8ce506245c02b101a1f3d62fe28f089900b8b08794dd653c2af3888d" Jan 30 22:35:08 crc kubenswrapper[4751]: I0130 22:35:08.410269 4751 scope.go:117] "RemoveContainer" containerID="b7d035e0c8a1729f7bd553455a75aa5d582ba92d231e9e7e3235b4066f573ef7" Jan 30 22:35:09 crc kubenswrapper[4751]: I0130 22:35:09.995501 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328960d4-cdf9-4134-b966-af48db38c682" path="/var/lib/kubelet/pods/328960d4-cdf9-4134-b966-af48db38c682/volumes" Jan 30 22:35:10 crc kubenswrapper[4751]: I0130 22:35:10.770173 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:10 crc kubenswrapper[4751]: I0130 22:35:10.872565 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:11 crc kubenswrapper[4751]: I0130 22:35:11.168523 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:12 crc kubenswrapper[4751]: I0130 22:35:12.331421 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mmdjh" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" containerID="cri-o://4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" gracePeriod=2 Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.170788 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.238931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") pod \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\" (UID: \"ad7b70f7-4a24-4ecf-825b-29383cc2b01e\") " Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.242809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities" (OuterVolumeSpecName: "utilities") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.264814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j" (OuterVolumeSpecName: "kube-api-access-7qz2j") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "kube-api-access-7qz2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343124 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" exitCode=0 Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343212 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmdjh" event={"ID":"ad7b70f7-4a24-4ecf-825b-29383cc2b01e","Type":"ContainerDied","Data":"f56f6afdb1c34b3fd6832f87715462fd3fb2f665ac0ee6d4689e6af74b5ac7ce"} Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343234 4751 scope.go:117] "RemoveContainer" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343255 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343300 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qz2j\" (UniqueName: \"kubernetes.io/projected/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-kube-api-access-7qz2j\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.343371 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmdjh" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.374363 4751 scope.go:117] "RemoveContainer" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.394070 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7b70f7-4a24-4ecf-825b-29383cc2b01e" (UID: "ad7b70f7-4a24-4ecf-825b-29383cc2b01e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.406211 4751 scope.go:117] "RemoveContainer" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.445862 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7b70f7-4a24-4ecf-825b-29383cc2b01e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.465500 4751 scope.go:117] "RemoveContainer" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.470459 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": container with ID starting with 4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45 not found: ID does not exist" containerID="4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.470628 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45"} err="failed to get container status \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": rpc error: code = NotFound desc = could not find container \"4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45\": container with ID starting with 4be112339abc0ac5a431288d01257e3e3f98ce652cfb9dc78b3e90a13c956b45 not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.470712 4751 scope.go:117] "RemoveContainer" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.471715 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": container with ID starting with b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12 not found: ID does not exist" containerID="b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.471754 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12"} err="failed to get container status \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": rpc error: code = NotFound desc = could not find container \"b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12\": container with ID starting with b06670816b157fe9b19611691d748c7b693824ae882d3b3e3db74ddbf02c8d12 not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.471778 4751 scope.go:117] "RemoveContainer" containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: E0130 22:35:13.472213 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": container with ID starting with f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be not found: ID does not exist" 
containerID="f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.472433 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be"} err="failed to get container status \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": rpc error: code = NotFound desc = could not find container \"f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be\": container with ID starting with f17170448b693703ee174a9d749a77d124f9d6b30e425ad03edc67f93a29f5be not found: ID does not exist" Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.687732 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.708154 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mmdjh"] Jan 30 22:35:13 crc kubenswrapper[4751]: I0130 22:35:13.989053 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" path="/var/lib/kubelet/pods/ad7b70f7-4a24-4ecf-825b-29383cc2b01e/volumes" Jan 30 22:35:54 crc kubenswrapper[4751]: I0130 22:35:54.128569 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:35:54 crc kubenswrapper[4751]: I0130 22:35:54.130290 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:24 crc kubenswrapper[4751]: I0130 22:36:24.126853 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:36:24 crc kubenswrapper[4751]: I0130 22:36:24.127603 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.126446 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.127052 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 
22:36:54.127108 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.128129 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.128312 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" gracePeriod=600 Jan 30 22:36:54 crc kubenswrapper[4751]: E0130 22:36:54.252006 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383512 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" exitCode=0 Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd"} Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.383627 4751 scope.go:117] "RemoveContainer" containerID="99d2a0709014fe5031de012673bab8841bfdcebadd3c614ac1c6d9e193438395" Jan 30 22:36:54 crc kubenswrapper[4751]: I0130 22:36:54.384165 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:36:54 crc kubenswrapper[4751]: E0130 22:36:54.384577 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:05 crc kubenswrapper[4751]: I0130 22:37:05.976499 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:05 crc kubenswrapper[4751]: E0130 22:37:05.977303 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" 
podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:16 crc kubenswrapper[4751]: I0130 22:37:16.976033 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:16 crc kubenswrapper[4751]: E0130 22:37:16.976797 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:28 crc kubenswrapper[4751]: I0130 22:37:28.975993 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:28 crc kubenswrapper[4751]: E0130 22:37:28.977260 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:40 crc kubenswrapper[4751]: I0130 22:37:40.977842 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:40 crc kubenswrapper[4751]: E0130 22:37:40.978647 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:37:54 crc kubenswrapper[4751]: I0130 22:37:54.976168 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:37:54 crc kubenswrapper[4751]: E0130 22:37:54.976994 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:07 crc kubenswrapper[4751]: I0130 22:38:07.976338 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:07 crc kubenswrapper[4751]: E0130 22:38:07.977483 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:19 crc kubenswrapper[4751]: I0130 22:38:19.976131 4751 scope.go:117] "RemoveContainer" 
containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:19 crc kubenswrapper[4751]: E0130 22:38:19.976923 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:33 crc kubenswrapper[4751]: I0130 22:38:33.976026 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:33 crc kubenswrapper[4751]: E0130 22:38:33.976866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:45 crc kubenswrapper[4751]: I0130 22:38:45.977441 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:45 crc kubenswrapper[4751]: E0130 22:38:45.978937 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:38:58 crc kubenswrapper[4751]: I0130 22:38:58.975761 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:38:58 crc kubenswrapper[4751]: E0130 22:38:58.976550 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:12 crc kubenswrapper[4751]: I0130 22:39:12.976109 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:12 crc kubenswrapper[4751]: E0130 22:39:12.976881 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:23 crc kubenswrapper[4751]: I0130 22:39:23.975809 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:23 crc kubenswrapper[4751]: E0130 22:39:23.976764 4751 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:38 crc kubenswrapper[4751]: I0130 22:39:38.976804 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:38 crc kubenswrapper[4751]: E0130 22:39:38.977864 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:39:53 crc kubenswrapper[4751]: I0130 22:39:53.977047 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:39:53 crc kubenswrapper[4751]: E0130 22:39:53.977762 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:08 crc kubenswrapper[4751]: I0130 22:40:08.976542 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:08 crc kubenswrapper[4751]: E0130 22:40:08.978011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:23 crc kubenswrapper[4751]: I0130 22:40:23.976508 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:23 crc kubenswrapper[4751]: E0130 22:40:23.977355 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:35 crc kubenswrapper[4751]: I0130 22:40:35.978287 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:35 crc kubenswrapper[4751]: E0130 22:40:35.979355 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.945106 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.950982 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951082 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951122 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951131 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-content" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951152 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951160 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951172 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="extract-utilities" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951198 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951215 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: E0130 22:40:43.951242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.951252 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.952755 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7b70f7-4a24-4ecf-825b-29383cc2b01e" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.952793 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="328960d4-cdf9-4134-b966-af48db38c682" containerName="registry-server" Jan 30 22:40:43 crc kubenswrapper[4751]: I0130 22:40:43.959289 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.024555 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.079717 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.080035 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.080201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.184917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.185282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.185545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.187694 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.187868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.218122 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"community-operators-mx86x\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:44 crc kubenswrapper[4751]: I0130 22:40:44.292369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:45 crc kubenswrapper[4751]: I0130 22:40:45.473047 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:40:45 crc kubenswrapper[4751]: I0130 22:40:45.892123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"0caa71c35533108c2603378372f1b0bcf3887c19290e714633a57af4fbebb8fe"} Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.905932 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" exitCode=0 Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.906226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"} Jan 30 22:40:46 crc kubenswrapper[4751]: I0130 22:40:46.912339 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:40:48 crc kubenswrapper[4751]: I0130 22:40:48.947161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} Jan 30 22:40:48 crc kubenswrapper[4751]: I0130 22:40:48.977194 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:40:48 crc kubenswrapper[4751]: E0130 22:40:48.977943 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:40:51 crc kubenswrapper[4751]: I0130 22:40:51.003470 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28" exitCode=0 Jan 30 22:40:51 crc kubenswrapper[4751]: I0130 22:40:51.003553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} Jan 30 22:40:52 crc kubenswrapper[4751]: I0130 22:40:52.018790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" 
event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerStarted","Data":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} Jan 30 22:40:52 crc kubenswrapper[4751]: I0130 22:40:52.055342 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mx86x" podStartSLOduration=4.472165214 podStartE2EDuration="9.055289436s" podCreationTimestamp="2026-01-30 22:40:43 +0000 UTC" firstStartedPulling="2026-01-30 22:40:46.908230347 +0000 UTC m=+5185.654052996" lastFinishedPulling="2026-01-30 22:40:51.491354569 +0000 UTC m=+5190.237177218" observedRunningTime="2026-01-30 22:40:52.043961764 +0000 UTC m=+5190.789784413" watchObservedRunningTime="2026-01-30 22:40:52.055289436 +0000 UTC m=+5190.801112085" Jan 30 22:40:54 crc kubenswrapper[4751]: I0130 22:40:54.293260 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:54 crc kubenswrapper[4751]: I0130 22:40:54.294484 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:40:55 crc kubenswrapper[4751]: I0130 22:40:55.346571 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" probeResult="failure" output=< Jan 30 22:40:55 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:40:55 crc kubenswrapper[4751]: > Jan 30 22:41:00 crc kubenswrapper[4751]: I0130 22:41:00.976753 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:00 crc kubenswrapper[4751]: E0130 22:41:00.977674 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:05 crc kubenswrapper[4751]: I0130 22:41:05.353260 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" probeResult="failure" output=< Jan 30 22:41:05 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:41:05 crc kubenswrapper[4751]: > Jan 30 22:41:12 crc kubenswrapper[4751]: I0130 22:41:12.975920 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:12 crc kubenswrapper[4751]: E0130 22:41:12.976721 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:14 crc kubenswrapper[4751]: I0130 22:41:14.341016 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:14 
crc kubenswrapper[4751]: I0130 22:41:14.393913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:15 crc kubenswrapper[4751]: I0130 22:41:15.130798 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx86x"] Jan 30 22:41:16 crc kubenswrapper[4751]: I0130 22:41:16.283957 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mx86x" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" containerID="cri-o://fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" gracePeriod=2 Jan 30 22:41:16 crc kubenswrapper[4751]: I0130 22:41:16.997863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx86x" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.084915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.085010 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.085113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.089992 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities" (OuterVolumeSpecName: "utilities") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.102778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6" (OuterVolumeSpecName: "kube-api-access-pzzg6") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "kube-api-access-pzzg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.188143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.188703 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") pod \"13f26c61-3909-4cab-9603-935ea3e141f7\" (UID: \"13f26c61-3909-4cab-9603-935ea3e141f7\") " Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.189474 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzzg6\" (UniqueName: \"kubernetes.io/projected/13f26c61-3909-4cab-9603-935ea3e141f7-kube-api-access-pzzg6\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.189491 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: W0130 22:41:17.190667 4751 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/13f26c61-3909-4cab-9603-935ea3e141f7/volumes/kubernetes.io~empty-dir/catalog-content Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.190694 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f26c61-3909-4cab-9603-935ea3e141f7" (UID: "13f26c61-3909-4cab-9603-935ea3e141f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.303277 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f26c61-3909-4cab-9603-935ea3e141f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307894 4751 generic.go:334] "Generic (PLEG): container finished" podID="13f26c61-3909-4cab-9603-935ea3e141f7" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" exitCode=0 Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307965 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx86x" event={"ID":"13f26c61-3909-4cab-9603-935ea3e141f7","Type":"ContainerDied","Data":"0caa71c35533108c2603378372f1b0bcf3887c19290e714633a57af4fbebb8fe"} Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.307982 4751 scope.go:117] "RemoveContainer" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.309021 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mx86x"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.351295 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx86x"]
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.362916 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mx86x"]
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.365049 4751 scope.go:117] "RemoveContainer" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.392572 4751 scope.go:117] "RemoveContainer" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.456943 4751 scope.go:117] "RemoveContainer" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"
Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.460247 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": container with ID starting with fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9 not found: ID does not exist" containerID="fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.460302 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9"} err="failed to get container status \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": rpc error: code = NotFound desc = could not find container \"fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9\": container with ID starting with fc93eee53f89db0e5ec9b07864cd511e729e869c7c9586164932a52db692b5c9 not found: ID does not exist"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.460370 4751 scope.go:117] "RemoveContainer" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"
Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.460964 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": container with ID starting with 4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28 not found: ID does not exist" containerID="4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461121 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28"} err="failed to get container status \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": rpc error: code = NotFound desc = could not find container \"4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28\": container with ID starting with 4ef1e2e151d7589c123a15b0534af79255baa7278a8d12ef82e5c289b2792e28 not found: ID does not exist"
Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461354 4751 scope.go:117] "RemoveContainer" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"
Jan 30 22:41:17 crc kubenswrapper[4751]: E0130 22:41:17.461804 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": container with ID starting with f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4 not found: ID does not exist" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"
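
Each RemoveContainer here races container garbage collection: by the time the kubelet asks CRI-O for the status of a container it has just deleted, the runtime answers with gRPC code NotFound, which the kubelet logs and then treats as already-satisfied cleanup. A sketch of how such a caller can classify the error with the gRPC status package (the helper name is illustrative, not kubelet code):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // isNotFound reports whether err carries gRPC code NotFound, the code
    // CRI-O returns in the "could not find container" errors above.
    func isNotFound(err error) bool {
        s, ok := status.FromError(err)
        return ok && s.Code() == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        if isNotFound(err) {
            // The desired state (container gone) already holds, so the
            // deletion error can be logged and ignored.
            fmt.Println("container already removed; nothing to delete")
        }
    }
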
failed" err="rpc error: code = NotFound desc = could not find container \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": container with ID starting with f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4 not found: ID does not exist" containerID="f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.461829 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4"} err="failed to get container status \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": rpc error: code = NotFound desc = could not find container \"f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4\": container with ID starting with f5155d06487934324c511dfefc7974b9968f8e07f2f9010ae8f2977a0d8743e4 not found: ID does not exist" Jan 30 22:41:17 crc kubenswrapper[4751]: I0130 22:41:17.992490 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" path="/var/lib/kubelet/pods/13f26c61-3909-4cab-9603-935ea3e141f7/volumes" Jan 30 22:41:25 crc kubenswrapper[4751]: I0130 22:41:25.978785 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:25 crc kubenswrapper[4751]: E0130 22:41:25.980053 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:36 crc kubenswrapper[4751]: I0130 22:41:36.975920 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:36 crc kubenswrapper[4751]: E0130 22:41:36.976710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:41:47 crc kubenswrapper[4751]: I0130 22:41:47.976816 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:41:47 crc kubenswrapper[4751]: E0130 22:41:47.977597 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:42:02 crc kubenswrapper[4751]: I0130 22:42:02.976632 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:42:03 crc kubenswrapper[4751]: I0130 22:42:03.836510 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.850783 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851728 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851741 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851773 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-content" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851781 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-content" Jan 30 22:43:52 crc kubenswrapper[4751]: E0130 22:43:52.851798 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-utilities" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.851806 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="extract-utilities" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.852007 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f26c61-3909-4cab-9603-935ea3e141f7" containerName="registry-server" Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.853700 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.881426 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"]
Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:52 crc kubenswrapper[4751]: I0130 22:43:52.938490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041221 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.041828 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.042262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.063461 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4"
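
Every catalog pod in this log carries the same three volumes, which is why each pod lifecycle shows matching VerifyControllerAttachedVolume / MountVolume / UnmountVolume triples: two pod-local emptyDir scratch volumes plus an API-server-injected projected token volume (kube-api-access-*). A sketch of the two emptyDir definitions using the Kubernetes API types; the token volume is omitted here because the control plane adds it automatically:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // catalogVolumes returns the two pod-local scratch volumes seen in the
    // mount/unmount entries. Both are emptyDir, i.e. node-local and deleted
    // with the pod, which is why teardown ends with the kubelet cleaning
    // /var/lib/kubelet/pods/<uid>/volumes.
    func catalogVolumes() []corev1.Volume {
        empty := func(name string) corev1.Volume {
            return corev1.Volume{
                Name:         name,
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }
        }
        return []corev1.Volume{empty("catalog-content"), empty("utilities")}
    }

    func main() {
        for _, v := range catalogVolumes() {
            fmt.Println(v.Name)
        }
    }
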
\"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"redhat-operators-cx2g4\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.190746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:43:53 crc kubenswrapper[4751]: I0130 22:43:53.733349 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.055787 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" exitCode=0 Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.056109 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986"} Jan 30 22:43:54 crc kubenswrapper[4751]: I0130 22:43:54.056143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"45b2720c7ea825893558acc60658f948a51ad8ce272f3d92c31ad58ff23c7742"} Jan 30 22:43:56 crc kubenswrapper[4751]: I0130 22:43:56.077184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} Jan 30 22:44:01 crc kubenswrapper[4751]: I0130 22:44:01.132448 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" exitCode=0 Jan 30 22:44:01 crc kubenswrapper[4751]: I0130 22:44:01.132550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} Jan 30 22:44:02 crc kubenswrapper[4751]: I0130 22:44:02.144646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerStarted","Data":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} Jan 30 22:44:02 crc kubenswrapper[4751]: I0130 22:44:02.162321 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cx2g4" podStartSLOduration=2.678330289 podStartE2EDuration="10.162301843s" podCreationTimestamp="2026-01-30 22:43:52 +0000 UTC" firstStartedPulling="2026-01-30 22:43:54.058621694 +0000 UTC m=+5372.804444343" lastFinishedPulling="2026-01-30 22:44:01.542593248 +0000 UTC m=+5380.288415897" observedRunningTime="2026-01-30 22:44:02.159580478 +0000 UTC m=+5380.905403147" watchObservedRunningTime="2026-01-30 22:44:02.162301843 +0000 UTC m=+5380.908124512" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.190896 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cx2g4" 
Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.191320 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.396341 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.411980 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.413051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.515501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.617844 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns" Jan 30 
22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.618517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns"
Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.645402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"certified-operators-k24ns\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " pod="openshift-marketplace/certified-operators-k24ns"
Jan 30 22:44:03 crc kubenswrapper[4751]: I0130 22:44:03.746444 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns"
Jan 30 22:44:04 crc kubenswrapper[4751]: I0130 22:44:04.250365 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=<
Jan 30 22:44:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 30 22:44:04 crc kubenswrapper[4751]: >
Jan 30 22:44:04 crc kubenswrapper[4751]: I0130 22:44:04.615809 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"]
Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176540 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" exitCode=0
Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57"}
Jan 30 22:44:05 crc kubenswrapper[4751]: I0130 22:44:05.176816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"929849532c0253cc3db083fc7dd3f3e972b4f182a69073c5905ab6c91d23ae1d"}
Jan 30 22:44:06 crc kubenswrapper[4751]: I0130 22:44:06.191509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"}
Jan 30 22:44:08 crc kubenswrapper[4751]: I0130 22:44:08.212553 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" exitCode=0
Jan 30 22:44:08 crc kubenswrapper[4751]: I0130 22:44:08.212618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"}
Jan 30 22:44:10 crc kubenswrapper[4751]: I0130 22:44:10.239443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"}
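
The startup-probe failures above (timeout: failed to connect service ":50051" within 1s) mean the registry-server's gRPC endpoint was not yet accepting connections inside the probe's 1-second deadline; for certified-operators-k24ns the same probe flips to "started" in the entries that follow, while redhat-operators-cx2g4 keeps failing for a while longer. A sketch that reproduces the observable part of the check, a deadline-bounded dial (the real probe additionally speaks the gRPC health-checking protocol, which this sketch does not):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithin fails if the service socket cannot be reached within the
    // deadline, mirroring the probe output seen in the log. TCP only; the
    // actual probe also performs a gRPC health check.
    func dialWithin(addr string, d time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, d)
        if err != nil {
            return fmt.Errorf("timeout: failed to connect service %q within %s: %w", addr, d, err)
        }
        return conn.Close()
    }

    func main() {
        if err := dialWithin(":50051", time.Second); err != nil {
            fmt.Println(err) // analogous to the probe output above
        }
    }
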
event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerStarted","Data":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} Jan 30 22:44:10 crc kubenswrapper[4751]: I0130 22:44:10.268873 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k24ns" podStartSLOduration=3.8257431669999997 podStartE2EDuration="7.26885052s" podCreationTimestamp="2026-01-30 22:44:03 +0000 UTC" firstStartedPulling="2026-01-30 22:44:05.178189205 +0000 UTC m=+5383.924011854" lastFinishedPulling="2026-01-30 22:44:08.621296558 +0000 UTC m=+5387.367119207" observedRunningTime="2026-01-30 22:44:10.260526842 +0000 UTC m=+5389.006349511" watchObservedRunningTime="2026-01-30 22:44:10.26885052 +0000 UTC m=+5389.014673169" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.747236 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.747895 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:13 crc kubenswrapper[4751]: I0130 22:44:13.808442 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.244217 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=< Jan 30 22:44:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:44:14 crc kubenswrapper[4751]: > Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.333944 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:14 crc kubenswrapper[4751]: I0130 22:44:14.395832 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:16 crc kubenswrapper[4751]: I0130 22:44:16.303778 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k24ns" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" containerID="cri-o://ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" gracePeriod=2 Jan 30 22:44:16 crc kubenswrapper[4751]: I0130 22:44:16.906106 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084599 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.084827 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") pod \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\" (UID: \"d5dd7bf4-a432-4493-ba75-3332bd1796e2\") " Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.085406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities" (OuterVolumeSpecName: "utilities") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.085873 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.093451 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k" (OuterVolumeSpecName: "kube-api-access-pqz2k") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "kube-api-access-pqz2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.143582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5dd7bf4-a432-4493-ba75-3332bd1796e2" (UID: "d5dd7bf4-a432-4493-ba75-3332bd1796e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.188486 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqz2k\" (UniqueName: \"kubernetes.io/projected/d5dd7bf4-a432-4493-ba75-3332bd1796e2-kube-api-access-pqz2k\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.188521 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5dd7bf4-a432-4493-ba75-3332bd1796e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318348 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" exitCode=0 Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24ns" event={"ID":"d5dd7bf4-a432-4493-ba75-3332bd1796e2","Type":"ContainerDied","Data":"929849532c0253cc3db083fc7dd3f3e972b4f182a69073c5905ab6c91d23ae1d"} Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318441 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24ns" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.318488 4751 scope.go:117] "RemoveContainer" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.349019 4751 scope.go:117] "RemoveContainer" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.361996 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.380317 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k24ns"] Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.386236 4751 scope.go:117] "RemoveContainer" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.450213 4751 scope.go:117] "RemoveContainer" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.451805 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": container with ID starting with ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e not found: ID does not exist" containerID="ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.451843 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e"} err="failed to get container status 
\"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": rpc error: code = NotFound desc = could not find container \"ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e\": container with ID starting with ac87700bb1d5a0e3a99e322a1991bd81a9e300173fce5110526503c94867d56e not found: ID does not exist" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.451873 4751 scope.go:117] "RemoveContainer" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.452400 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": container with ID starting with e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3 not found: ID does not exist" containerID="e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452436 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3"} err="failed to get container status \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": rpc error: code = NotFound desc = could not find container \"e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3\": container with ID starting with e5019237efb28f646ab6ebb27da879fbb15e3823cf21ffcb3f73b12b64e05ff3 not found: ID does not exist" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452459 4751 scope.go:117] "RemoveContainer" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: E0130 22:44:17.452765 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": container with ID starting with 494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57 not found: ID does not exist" containerID="494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57" Jan 30 22:44:17 crc kubenswrapper[4751]: I0130 22:44:17.452806 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57"} err="failed to get container status \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": rpc error: code = NotFound desc = could not find container \"494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57\": container with ID starting with 494eb7d5f693312e5229d81ca50f77f8fa6ea43ea3a58094040b4842adc95b57 not found: ID does not exist" Jan 30 22:44:18 crc kubenswrapper[4751]: I0130 22:44:18.002844 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" path="/var/lib/kubelet/pods/d5dd7bf4-a432-4493-ba75-3332bd1796e2/volumes" Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.126421 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.127022 4751 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:44:24 crc kubenswrapper[4751]: I0130 22:44:24.241254 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" probeResult="failure" output=< Jan 30 22:44:24 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:44:24 crc kubenswrapper[4751]: > Jan 30 22:44:33 crc kubenswrapper[4751]: I0130 22:44:33.243119 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:33 crc kubenswrapper[4751]: I0130 22:44:33.302022 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:34 crc kubenswrapper[4751]: I0130 22:44:34.227990 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:34 crc kubenswrapper[4751]: I0130 22:44:34.504938 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cx2g4" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" containerID="cri-o://38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" gracePeriod=2 Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.177744 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.321955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.322148 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.322173 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") pod \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\" (UID: \"d95f56e2-6bf3-45be-8cb9-bdfe109d5305\") " Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.323208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities" (OuterVolumeSpecName: "utilities") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.327650 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz" (OuterVolumeSpecName: "kube-api-access-jbtmz") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "kube-api-access-jbtmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.426115 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.426171 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbtmz\" (UniqueName: \"kubernetes.io/projected/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-kube-api-access-jbtmz\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.462408 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d95f56e2-6bf3-45be-8cb9-bdfe109d5305" (UID: "d95f56e2-6bf3-45be-8cb9-bdfe109d5305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518486 4751 generic.go:334] "Generic (PLEG): container finished" podID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" exitCode=0 Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cx2g4" event={"ID":"d95f56e2-6bf3-45be-8cb9-bdfe109d5305","Type":"ContainerDied","Data":"45b2720c7ea825893558acc60658f948a51ad8ce272f3d92c31ad58ff23c7742"} Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518601 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cx2g4" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.518681 4751 scope.go:117] "RemoveContainer" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.529252 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95f56e2-6bf3-45be-8cb9-bdfe109d5305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.549755 4751 scope.go:117] "RemoveContainer" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.562220 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.573050 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cx2g4"] Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.585001 4751 scope.go:117] "RemoveContainer" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630176 4751 scope.go:117] "RemoveContainer" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.630935 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": container with ID starting with 38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a not found: ID does not exist" containerID="38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630964 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a"} err="failed to get container status \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": rpc error: code = NotFound desc = could not find container \"38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a\": container with ID starting with 38bf1dcc85c14c60748f5f383be192283086b1ecc8e6c1e69b860eac488c964a not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.630986 4751 scope.go:117] "RemoveContainer" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.631444 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": container with ID starting with 78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6 not found: ID does not exist" containerID="78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631464 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6"} err="failed to get container status \"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": rpc error: code = NotFound desc = could not find container 
\"78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6\": container with ID starting with 78509c29b223ad50134d1dd5372db8ab88fbdadbd09a3a1b2556e2727abf2ea6 not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631476 4751 scope.go:117] "RemoveContainer" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: E0130 22:44:35.631716 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": container with ID starting with 373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986 not found: ID does not exist" containerID="373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.631746 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986"} err="failed to get container status \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": rpc error: code = NotFound desc = could not find container \"373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986\": container with ID starting with 373144bf5c58c4afd5940b51711edc9b1c3ea33170109e30e8dc7ff0d4b37986 not found: ID does not exist" Jan 30 22:44:35 crc kubenswrapper[4751]: I0130 22:44:35.994740 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" path="/var/lib/kubelet/pods/d95f56e2-6bf3-45be-8cb9-bdfe109d5305/volumes" Jan 30 22:44:54 crc kubenswrapper[4751]: I0130 22:44:54.127140 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:44:54 crc kubenswrapper[4751]: I0130 22:44:54.127630 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.226645 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227747 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227769 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227791 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227802 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227823 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227831 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227880 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227888 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227906 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227914 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="extract-utilities" Jan 30 22:45:00 crc kubenswrapper[4751]: E0130 22:45:00.227932 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.227940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="extract-content" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.228203 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5dd7bf4-a432-4493-ba75-3332bd1796e2" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.228228 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95f56e2-6bf3-45be-8cb9-bdfe109d5305" containerName="registry-server" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.229182 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.241660 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.253112 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.261133 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.421998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.422403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.422639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.525878 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.526765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod 
\"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.533171 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.544132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"collect-profiles-29496885-n9nln\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:00 crc kubenswrapper[4751]: I0130 22:45:00.568857 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.042170 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln"] Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.881760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerStarted","Data":"a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3"} Jan 30 22:45:01 crc kubenswrapper[4751]: I0130 22:45:01.882176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerStarted","Data":"3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c"} Jan 30 22:45:02 crc kubenswrapper[4751]: I0130 22:45:02.894837 4751 generic.go:334] "Generic (PLEG): container finished" podID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerID="a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3" exitCode=0 Jan 30 22:45:02 crc kubenswrapper[4751]: I0130 22:45:02.895053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerDied","Data":"a296ff244120e02ca48530e5d9a31814d1ffbf86deffc46137711742d0fb0eb3"} Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.320233 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.415981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") pod \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\" (UID: \"d82ca308-99f6-4e91-969e-fa3eb429b8fc\") " Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.416754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.417435 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ca308-99f6-4e91-969e-fa3eb429b8fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.421541 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.422684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb" (OuterVolumeSpecName: "kube-api-access-mbklb") pod "d82ca308-99f6-4e91-969e-fa3eb429b8fc" (UID: "d82ca308-99f6-4e91-969e-fa3eb429b8fc"). InnerVolumeSpecName "kube-api-access-mbklb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.518394 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbklb\" (UniqueName: \"kubernetes.io/projected/d82ca308-99f6-4e91-969e-fa3eb429b8fc-kube-api-access-mbklb\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.518426 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d82ca308-99f6-4e91-969e-fa3eb429b8fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" event={"ID":"d82ca308-99f6-4e91-969e-fa3eb429b8fc","Type":"ContainerDied","Data":"3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c"} Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918976 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa379813f0dbc3dd5fc918bd713dab3b7c37eab60aede7f4f9c1cdaf906812c" Jan 30 22:45:04 crc kubenswrapper[4751]: I0130 22:45:04.918958 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496885-n9nln" Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.402949 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.415282 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496840-cv6z6"] Jan 30 22:45:05 crc kubenswrapper[4751]: I0130 22:45:05.991405 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fd180d-a717-4e4f-92fc-e8e77f2d303c" path="/var/lib/kubelet/pods/87fd180d-a717-4e4f-92fc-e8e77f2d303c/volumes" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.126677 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.127238 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.127284 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.128283 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:45:24 crc kubenswrapper[4751]: I0130 22:45:24.128366 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" gracePeriod=600 Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.164544 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" exitCode=0 Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.164643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33"} Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.165063 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} Jan 30 22:45:25 crc kubenswrapper[4751]: I0130 22:45:25.165087 4751 scope.go:117] "RemoveContainer" containerID="50bbd2dc547fd2f2b4813d5363ae69e948fb5cd3ae86485f5bf3345f007ef4dd" Jan 30 22:45:29 crc kubenswrapper[4751]: I0130 22:45:29.744310 4751 scope.go:117] "RemoveContainer" containerID="8cb214ecc973d14bc0906a66a17ca4c95d3c39c0cada1250d2a736afa76d1aeb" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.943805 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:49 crc kubenswrapper[4751]: E0130 22:45:49.945364 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.945390 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.945853 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82ca308-99f6-4e91-969e-fa3eb429b8fc" containerName="collect-profiles" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.948776 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:49 crc kubenswrapper[4751]: I0130 22:45:49.959186 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.079801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.079928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.080036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.183986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.184611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.185446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.207818 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"redhat-marketplace-kzzg4\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.283833 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:45:50 crc kubenswrapper[4751]: I0130 22:45:50.795741 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449544 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" exitCode=0 Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a"} Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.449855 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"360768f03c69a0f21e401d2ca0fd8a4048177b362bdd5e4bc3ab30fcf847656e"} Jan 30 22:45:51 crc kubenswrapper[4751]: I0130 22:45:51.453375 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:45:52 crc kubenswrapper[4751]: I0130 22:45:52.474525 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} Jan 30 22:45:53 crc kubenswrapper[4751]: I0130 22:45:53.489129 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" exitCode=0 Jan 30 22:45:53 crc kubenswrapper[4751]: I0130 22:45:53.489229 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} Jan 30 22:45:55 crc kubenswrapper[4751]: I0130 22:45:55.513522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerStarted","Data":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} Jan 30 22:45:55 crc kubenswrapper[4751]: I0130 22:45:55.544561 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzzg4" podStartSLOduration=4.102318896 podStartE2EDuration="6.544366437s" podCreationTimestamp="2026-01-30 22:45:49 +0000 UTC" firstStartedPulling="2026-01-30 22:45:51.452934312 +0000 UTC m=+5490.198756961" lastFinishedPulling="2026-01-30 22:45:53.894981863 +0000 UTC m=+5492.640804502" observedRunningTime="2026-01-30 22:45:55.532922421 +0000 UTC m=+5494.278745090" watchObservedRunningTime="2026-01-30 22:45:55.544366437 +0000 UTC 
m=+5494.290189096" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.284572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.285232 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.342053 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.619909 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:00 crc kubenswrapper[4751]: I0130 22:46:00.679191 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:02 crc kubenswrapper[4751]: I0130 22:46:02.582177 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzzg4" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" containerID="cri-o://ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" gracePeriod=2 Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.129160 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217146 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.217264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") pod \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\" (UID: \"70ea4446-1a4d-41dc-a96c-ca1c271f80ff\") " Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.218007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities" (OuterVolumeSpecName: "utilities") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.218547 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.223584 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl" (OuterVolumeSpecName: "kube-api-access-hbfbl") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "kube-api-access-hbfbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.250383 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ea4446-1a4d-41dc-a96c-ca1c271f80ff" (UID: "70ea4446-1a4d-41dc-a96c-ca1c271f80ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.320829 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.320860 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfbl\" (UniqueName: \"kubernetes.io/projected/70ea4446-1a4d-41dc-a96c-ca1c271f80ff-kube-api-access-hbfbl\") on node \"crc\" DevicePath \"\"" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597520 4751 generic.go:334] "Generic (PLEG): container finished" podID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" exitCode=0 Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597561 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzzg4" event={"ID":"70ea4446-1a4d-41dc-a96c-ca1c271f80ff","Type":"ContainerDied","Data":"360768f03c69a0f21e401d2ca0fd8a4048177b362bdd5e4bc3ab30fcf847656e"} Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597608 4751 scope.go:117] "RemoveContainer" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.597626 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzzg4" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.636484 4751 scope.go:117] "RemoveContainer" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.654419 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.676348 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzzg4"] Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.686534 4751 scope.go:117] "RemoveContainer" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.722369 4751 scope.go:117] "RemoveContainer" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.722933 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": container with ID starting with ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2 not found: ID does not exist" containerID="ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.722997 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2"} err="failed to get container status \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": rpc error: code = NotFound desc = could not find container \"ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2\": container with ID starting with ea733e7aff846f1027bf3d4f1bede31ff58b392282c7cbc0cbe3c1d0198724f2 not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723026 4751 scope.go:117] "RemoveContainer" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.723279 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": container with ID starting with e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3 not found: ID does not exist" containerID="e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723311 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3"} err="failed to get container status \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": rpc error: code = NotFound desc = could not find container \"e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3\": container with ID starting with e58afeef5056b0764ab3ea37f8a543eabf134eeb4ffad8b501db243993968da3 not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723338 4751 scope.go:117] "RemoveContainer" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: E0130 22:46:03.723572 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": container with ID starting with a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a not found: ID does not exist" containerID="a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.723591 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a"} err="failed to get container status \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": rpc error: code = NotFound desc = could not find container \"a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a\": container with ID starting with a24d8e84afeae39a112c98dadb4292120237f9c5a8d0947f3c1a6831ff14225a not found: ID does not exist" Jan 30 22:46:03 crc kubenswrapper[4751]: I0130 22:46:03.991517 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" path="/var/lib/kubelet/pods/70ea4446-1a4d-41dc-a96c-ca1c271f80ff/volumes" Jan 30 22:47:24 crc kubenswrapper[4751]: I0130 22:47:24.127536 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:47:24 crc kubenswrapper[4751]: I0130 22:47:24.128148 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:47:54 crc kubenswrapper[4751]: I0130 22:47:54.126924 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:47:54 crc kubenswrapper[4751]: I0130 22:47:54.127606 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.126843 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.128010 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.128121 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.129964 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:48:24 crc kubenswrapper[4751]: I0130 22:48:24.130061 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" gracePeriod=600 Jan 30 22:48:24 crc kubenswrapper[4751]: E0130 22:48:24.260291 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.245597 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" exitCode=0 Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.245704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"} Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.246045 4751 scope.go:117] "RemoveContainer" containerID="f46db10d2da49faae0086076a1a33f5d3a22e7c6010009d90d2a34188dcd0e33" Jan 30 22:48:25 crc kubenswrapper[4751]: I0130 22:48:25.247641 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:48:25 crc kubenswrapper[4751]: E0130 22:48:25.248445 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:37 crc kubenswrapper[4751]: I0130 22:48:37.975615 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:48:37 crc kubenswrapper[4751]: E0130 22:48:37.976405 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:48:51 crc 
Jan 30 22:48:51 crc kubenswrapper[4751]: I0130 22:48:51.986792 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:48:51 crc kubenswrapper[4751]: E0130 22:48:51.987462 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:49:02 crc kubenswrapper[4751]: I0130 22:49:02.976289 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:49:02 crc kubenswrapper[4751]: E0130 22:49:02.977299 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:49:15 crc kubenswrapper[4751]: I0130 22:49:15.977323 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:49:15 crc kubenswrapper[4751]: E0130 22:49:15.978455 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:49:29 crc kubenswrapper[4751]: I0130 22:49:29.976590 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:49:29 crc kubenswrapper[4751]: E0130 22:49:29.977239 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:49:43 crc kubenswrapper[4751]: I0130 22:49:43.287894 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:49:43 crc kubenswrapper[4751]: E0130 22:49:43.300933 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:49:57 crc kubenswrapper[4751]: I0130 22:49:57.975854 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:49:57 crc kubenswrapper[4751]: E0130 22:49:57.977932 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:50:11 crc kubenswrapper[4751]: I0130 22:50:11.985694 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:50:11 crc kubenswrapper[4751]: E0130 22:50:11.986515 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:50:25 crc kubenswrapper[4751]: I0130 22:50:25.976095 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:50:25 crc kubenswrapper[4751]: E0130 22:50:25.976979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:50:38 crc kubenswrapper[4751]: I0130 22:50:38.976752 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:50:38 crc kubenswrapper[4751]: E0130 22:50:38.977879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:50:51 crc kubenswrapper[4751]: I0130 22:50:51.984221 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:50:51 crc kubenswrapper[4751]: E0130 22:50:51.985233 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:51:05 crc kubenswrapper[4751]: I0130 22:51:05.976468 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:51:05 crc kubenswrapper[4751]: E0130 22:51:05.977641 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
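The repeating RemoveContainer / CrashLoopBackOff pairs above are the sync loop re-evaluating the pod every ~11-15 seconds while the restart backoff has not yet expired; "back-off 5m0s" is the backoff cap, not the retry interval. The kubelet's container restart backoff is commonly described as starting around 10s and doubling up to a 5-minute cap; a sketch of that schedule (the initial delay and factor are assumptions, only the 5m cap is visible in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second    // assumed initial delay
	const maxBackoff = 5 * time.Minute // matches "back-off 5m0s" above
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v before next attempt\n", restart, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}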
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:18 crc kubenswrapper[4751]: I0130 22:51:18.976297 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:18 crc kubenswrapper[4751]: E0130 22:51:18.977292 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.655712 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656690 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656706 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-content" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656730 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-content" Jan 30 22:51:20 crc kubenswrapper[4751]: E0130 22:51:20.656757 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-utilities" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656763 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="extract-utilities" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.656997 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ea4446-1a4d-41dc-a96c-ca1c271f80ff" containerName="registry-server" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.659439 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.672064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.735670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.735961 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.736098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.839623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.839962 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.840117 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.840836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.841282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.866593 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"community-operators-fdrt4\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:20 crc kubenswrapper[4751]: I0130 22:51:20.984574 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:21 crc kubenswrapper[4751]: I0130 22:51:21.482493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:21 crc kubenswrapper[4751]: W0130 22:51:21.488818 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcac30a6_10b3_43ee_8e8e_8d2514b3f237.slice/crio-ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602 WatchSource:0}: Error finding container ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602: Status 404 returned error can't find the container with id ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602 Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406415 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" exitCode=0 Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171"} Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.406891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602"} Jan 30 22:51:22 crc kubenswrapper[4751]: I0130 22:51:22.409239 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:51:24 crc kubenswrapper[4751]: I0130 22:51:24.432251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} Jan 30 22:51:25 crc kubenswrapper[4751]: I0130 22:51:25.446720 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" exitCode=0 Jan 30 22:51:25 crc kubenswrapper[4751]: I0130 22:51:25.446770 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} Jan 30 22:51:26 crc kubenswrapper[4751]: I0130 22:51:26.460045 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerStarted","Data":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} Jan 30 22:51:26 crc kubenswrapper[4751]: I0130 
Jan 30 22:51:26 crc kubenswrapper[4751]: I0130 22:51:26.481422 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fdrt4" podStartSLOduration=3.0513860250000002 podStartE2EDuration="6.481401127s" podCreationTimestamp="2026-01-30 22:51:20 +0000 UTC" firstStartedPulling="2026-01-30 22:51:22.408858192 +0000 UTC m=+5821.154680851" lastFinishedPulling="2026-01-30 22:51:25.838873294 +0000 UTC m=+5824.584695953" observedRunningTime="2026-01-30 22:51:26.478913648 +0000 UTC m=+5825.224736297" watchObservedRunningTime="2026-01-30 22:51:26.481401127 +0000 UTC m=+5825.227223776"
Jan 30 22:51:29 crc kubenswrapper[4751]: I0130 22:51:29.977000 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232"
Jan 30 22:51:29 crc kubenswrapper[4751]: E0130 22:51:29.978013 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:51:30 crc kubenswrapper[4751]: I0130 22:51:30.984874 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fdrt4"
Jan 30 22:51:30 crc kubenswrapper[4751]: I0130 22:51:30.985126 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fdrt4"
Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.044701 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fdrt4"
Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.594253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fdrt4"
Jan 30 22:51:31 crc kubenswrapper[4751]: I0130 22:51:31.663755 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"]
Jan 30 22:51:32 crc kubenswrapper[4751]: I0130 22:51:32.782671 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="d55cd7e5-6799-4e1a-9f3b-a92937aca796" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 22:51:33 crc kubenswrapper[4751]: I0130 22:51:33.541033 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fdrt4" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" containerID="cri-o://f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" gracePeriod=2
Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.037537 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4"
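The "Observed pod startup duration" entry above decomposes cleanly: the E2E duration is observed-running minus pod creation, and the SLO duration additionally excludes image-pull time. A quick check of that arithmetic with the exact timestamps from the entry (this verifies the numbers; it is not the tracker's real code):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 22:51:20 +0000 UTC")
	firstPull := parse("2026-01-30 22:51:22.408858192 +0000 UTC")
	lastPull := parse("2026-01-30 22:51:25.838873294 +0000 UTC")
	running := parse("2026-01-30 22:51:26.481401127 +0000 UTC")

	e2e := running.Sub(created)          // 6.481401127s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull took 3.430015102s -> 3.051386025s
	fmt.Println(e2e, slo)                // slo matches podStartSLOduration in the log
}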
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.163996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.164346 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.164480 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") pod \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\" (UID: \"fcac30a6-10b3-43ee-8e8e-8d2514b3f237\") " Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.165098 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities" (OuterVolumeSpecName: "utilities") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.166927 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.173431 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz" (OuterVolumeSpecName: "kube-api-access-w9svz") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "kube-api-access-w9svz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.227932 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcac30a6-10b3-43ee-8e8e-8d2514b3f237" (UID: "fcac30a6-10b3-43ee-8e8e-8d2514b3f237"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.269970 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.270005 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9svz\" (UniqueName: \"kubernetes.io/projected/fcac30a6-10b3-43ee-8e8e-8d2514b3f237-kube-api-access-w9svz\") on node \"crc\" DevicePath \"\"" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556027 4751 generic.go:334] "Generic (PLEG): container finished" podID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" exitCode=0 Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556098 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdrt4" event={"ID":"fcac30a6-10b3-43ee-8e8e-8d2514b3f237","Type":"ContainerDied","Data":"ff664fa803d6f8df0cb6e9976be9f069d58f53ff54f7ff683dbcf4a7928d6602"} Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556173 4751 scope.go:117] "RemoveContainer" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.556201 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fdrt4" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.587369 4751 scope.go:117] "RemoveContainer" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.624477 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.636819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fdrt4"] Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.797260 4751 scope.go:117] "RemoveContainer" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849041 4751 scope.go:117] "RemoveContainer" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.849585 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": container with ID starting with f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1 not found: ID does not exist" containerID="f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849616 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1"} err="failed to get container status \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": rpc error: code = NotFound desc = could not find container \"f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1\": container with ID starting with f3c934d93a7f0e920df5360eb84a406c65cb131340a5736aae985c8ef32524c1 not found: ID does not exist" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849637 4751 scope.go:117] "RemoveContainer" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.849901 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": container with ID starting with d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476 not found: ID does not exist" containerID="d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849926 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476"} err="failed to get container status \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": rpc error: code = NotFound desc = could not find container \"d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476\": container with ID starting with d397daf1f9cb036fa054989b8a679919449dcee523e6dc6bdffab24bee374476 not found: ID does not exist" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.849940 4751 scope.go:117] "RemoveContainer" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: E0130 22:51:34.850151 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": container with ID starting with d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171 not found: ID does not exist" containerID="d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171" Jan 30 22:51:34 crc kubenswrapper[4751]: I0130 22:51:34.850183 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171"} err="failed to get container status \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": rpc error: code = NotFound desc = could not find container \"d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171\": container with ID starting with d72545f8b3044ed4e0017422a6fae9672f2b0ebd610e6b3b65568b73a0ace171 not found: ID does not exist" Jan 30 22:51:35 crc kubenswrapper[4751]: I0130 22:51:35.992296 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" path="/var/lib/kubelet/pods/fcac30a6-10b3-43ee-8e8e-8d2514b3f237/volumes" Jan 30 22:51:42 crc kubenswrapper[4751]: I0130 22:51:42.976118 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:42 crc kubenswrapper[4751]: E0130 22:51:42.977202 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:51:56 crc kubenswrapper[4751]: I0130 22:51:56.976793 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:51:56 crc kubenswrapper[4751]: E0130 22:51:56.977877 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:08 crc kubenswrapper[4751]: I0130 22:52:08.977349 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:08 crc kubenswrapper[4751]: E0130 22:52:08.978458 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:23 crc kubenswrapper[4751]: I0130 22:52:23.976366 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:23 crc kubenswrapper[4751]: E0130 22:52:23.976919 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:35 crc kubenswrapper[4751]: I0130 22:52:35.976500 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:35 crc kubenswrapper[4751]: E0130 22:52:35.977508 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:46 crc kubenswrapper[4751]: I0130 22:52:46.976042 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:46 crc kubenswrapper[4751]: E0130 22:52:46.976952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:52:58 crc kubenswrapper[4751]: I0130 22:52:58.976600 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:52:58 crc kubenswrapper[4751]: E0130 22:52:58.977541 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:53:13 crc kubenswrapper[4751]: I0130 22:53:13.976441 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:53:13 crc kubenswrapper[4751]: E0130 22:53:13.977268 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 22:53:25 crc kubenswrapper[4751]: I0130 22:53:25.977933 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:53:26 crc kubenswrapper[4751]: I0130 22:53:26.353557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} Jan 30 
Jan 30 22:55:09 crc kubenswrapper[4751]: I0130 22:55:09.597048 4751 generic.go:334] "Generic (PLEG): container finished" podID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerID="4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454" exitCode=0
Jan 30 22:55:09 crc kubenswrapper[4751]: I0130 22:55:09.597165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerDied","Data":"4a57649ebddefdd6cfb7979e8b07856c36ff49932c8103c4cfd06fb309f09454"}
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.080160 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142547 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142694 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142783 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.142889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.143097 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.143191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") pod \"053bddc4-b1a1-4951-af33-6230acd3ee0b\" (UID: \"053bddc4-b1a1-4951-af33-6230acd3ee0b\") "
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.145615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.147192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data" (OuterVolumeSpecName: "config-data") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.150616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.154360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.158636 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k" (OuterVolumeSpecName: "kube-api-access-cnm9k") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "kube-api-access-cnm9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.186548 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.187510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.195060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.215067 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "053bddc4-b1a1-4951-af33-6230acd3ee0b" (UID: "053bddc4-b1a1-4951-af33-6230acd3ee0b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257918 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257972 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.257989 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258011 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/053bddc4-b1a1-4951-af33-6230acd3ee0b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258032 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258044 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/053bddc4-b1a1-4951-af33-6230acd3ee0b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.258059 4751 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/053bddc4-b1a1-4951-af33-6230acd3ee0b-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.269070 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.269106 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnm9k\" (UniqueName: \"kubernetes.io/projected/053bddc4-b1a1-4951-af33-6230acd3ee0b-kube-api-access-cnm9k\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.299442 4751 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.371693 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"053bddc4-b1a1-4951-af33-6230acd3ee0b","Type":"ContainerDied","Data":"01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896"} Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619871 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b3d137ed8bb5af449d591205e958b031f5ad78d5d86311bd69b7e07f52d896" Jan 30 22:55:11 crc kubenswrapper[4751]: I0130 22:55:11.619943 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.508002 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509514 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509541 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509584 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-utilities" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509596 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-utilities" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509622 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-content" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509637 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="extract-content" Jan 30 22:55:14 crc kubenswrapper[4751]: E0130 22:55:14.509668 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.509681 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.510067 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcac30a6-10b3-43ee-8e8e-8d2514b3f237" containerName="registry-server" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.510090 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="053bddc4-b1a1-4951-af33-6230acd3ee0b" containerName="tempest-tests-tempest-tests-runner" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.511581 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.515254 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvc9j" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.520521 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.650271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.650806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.752810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.752955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.755682 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.779749 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcjr\" (UniqueName: \"kubernetes.io/projected/3555a827-6ba2-4057-a142-ea2818a3d76e-kube-api-access-mjcjr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.784861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3555a827-6ba2-4057-a142-ea2818a3d76e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 22:55:14 crc 
Jan 30 22:55:14 crc kubenswrapper[4751]: I0130 22:55:14.845008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 22:55:15 crc kubenswrapper[4751]: I0130 22:55:15.334907 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 30 22:55:15 crc kubenswrapper[4751]: I0130 22:55:15.667488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3555a827-6ba2-4057-a142-ea2818a3d76e","Type":"ContainerStarted","Data":"46b68f4efa8666c51d5131098e4b99383c0297af3a88d6037817cab6411c9901"}
Jan 30 22:55:17 crc kubenswrapper[4751]: I0130 22:55:17.695891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3555a827-6ba2-4057-a142-ea2818a3d76e","Type":"ContainerStarted","Data":"bc343ed706a565bd0701fc90656578995b227aca5ffd5039bc97a9ed067084cd"}
Jan 30 22:55:17 crc kubenswrapper[4751]: I0130 22:55:17.723657 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.2380563860000002 podStartE2EDuration="3.723634345s" podCreationTimestamp="2026-01-30 22:55:14 +0000 UTC" firstStartedPulling="2026-01-30 22:55:15.340525048 +0000 UTC m=+6054.086347697" lastFinishedPulling="2026-01-30 22:55:16.826103007 +0000 UTC m=+6055.571925656" observedRunningTime="2026-01-30 22:55:17.708640872 +0000 UTC m=+6056.454463531" watchObservedRunningTime="2026-01-30 22:55:17.723634345 +0000 UTC m=+6056.469457004"
Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.039267 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"]
Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.046690 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv"
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.060600 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.205633 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.205936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.206136 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.307955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.308900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.330624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"redhat-operators-2lghv\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.378204 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:43 crc kubenswrapper[4751]: I0130 22:55:43.881666 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:55:44 crc kubenswrapper[4751]: I0130 22:55:44.057586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"d450517e757eb66afa3e8e00e500666c0b3dc5e030c10dd122a698c77bf8e56d"} Jan 30 22:55:45 crc kubenswrapper[4751]: I0130 22:55:45.070111 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" exitCode=0 Jan 30 22:55:45 crc kubenswrapper[4751]: I0130 22:55:45.070176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f"} Jan 30 22:55:46 crc kubenswrapper[4751]: I0130 22:55:46.084529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.150469 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" exitCode=0 Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.150551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.939768 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.942165 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.945036 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-89qbh"/"openshift-service-ca.crt" Jan 30 22:55:51 crc kubenswrapper[4751]: I0130 22:55:51.953384 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-89qbh"/"kube-root-ca.crt" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.048161 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.058127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.058293 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161279 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161647 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.161917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.165959 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerStarted","Data":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.192065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"must-gather-xtff4\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.202136 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2lghv" 
podStartSLOduration=3.681503996 podStartE2EDuration="10.202114596s" podCreationTimestamp="2026-01-30 22:55:42 +0000 UTC" firstStartedPulling="2026-01-30 22:55:45.073057797 +0000 UTC m=+6083.818880446" lastFinishedPulling="2026-01-30 22:55:51.593668397 +0000 UTC m=+6090.339491046" observedRunningTime="2026-01-30 22:55:52.184539886 +0000 UTC m=+6090.930362545" watchObservedRunningTime="2026-01-30 22:55:52.202114596 +0000 UTC m=+6090.947937245" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.272869 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.871510 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.874581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.885373 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.972182 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: I0130 22:55:52.980595 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:52 crc kubenswrapper[4751]: W0130 22:55:52.990358 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2d69f7_78aa_4618_a287_008258e34b47.slice/crio-575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549 WatchSource:0}: Error finding container 575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549: Status 404 returned error can't find the container with id 575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549 Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.083318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc 
kubenswrapper[4751]: I0130 22:55:53.083954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.084646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.112877 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"redhat-marketplace-z9d6t\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.190818 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"575b4d3eb8cef83290428e5d8b415ca61275473bf7a1f9dc15448f290fbb2549"} Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.202530 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.381029 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.381533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:55:53 crc kubenswrapper[4751]: I0130 22:55:53.718624 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.126938 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.127323 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.239748 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7" exitCode=0 Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.241407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7"} Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.241458 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"657149d18faffd7be299b21cac1edbfbf6c55b1e66a8d6b785bd39e1fe11d816"} Jan 30 22:55:54 crc kubenswrapper[4751]: I0130 22:55:54.441133 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:55:54 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:55:54 crc kubenswrapper[4751]: > Jan 30 22:55:55 crc kubenswrapper[4751]: I0130 22:55:55.255737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f"} Jan 30 22:55:57 crc kubenswrapper[4751]: I0130 22:55:57.281644 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f" exitCode=0 Jan 30 22:55:57 crc kubenswrapper[4751]: I0130 22:55:57.281841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" 
event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f"} Jan 30 22:55:58 crc kubenswrapper[4751]: I0130 22:55:58.300817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.312933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerStarted","Data":"7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.315160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerStarted","Data":"f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596"} Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.344884 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9d6t" podStartSLOduration=3.334271574 podStartE2EDuration="7.34485499s" podCreationTimestamp="2026-01-30 22:55:52 +0000 UTC" firstStartedPulling="2026-01-30 22:55:54.249523255 +0000 UTC m=+6092.995345904" lastFinishedPulling="2026-01-30 22:55:58.260106671 +0000 UTC m=+6097.005929320" observedRunningTime="2026-01-30 22:55:59.338094995 +0000 UTC m=+6098.083917664" watchObservedRunningTime="2026-01-30 22:55:59.34485499 +0000 UTC m=+6098.090677649" Jan 30 22:55:59 crc kubenswrapper[4751]: I0130 22:55:59.360655 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-89qbh/must-gather-xtff4" podStartSLOduration=3.510704043 podStartE2EDuration="8.360636971s" podCreationTimestamp="2026-01-30 22:55:51 +0000 UTC" firstStartedPulling="2026-01-30 22:55:52.999024657 +0000 UTC m=+6091.744847306" lastFinishedPulling="2026-01-30 22:55:57.848957585 +0000 UTC m=+6096.594780234" observedRunningTime="2026-01-30 22:55:59.354911054 +0000 UTC m=+6098.100733703" watchObservedRunningTime="2026-01-30 22:55:59.360636971 +0000 UTC m=+6098.106459620" Jan 30 22:56:03 crc kubenswrapper[4751]: I0130 22:56:03.204736 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:03 crc kubenswrapper[4751]: I0130 22:56:03.205216 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.260866 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z9d6t" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:04 crc kubenswrapper[4751]: > Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.442639 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 
30 22:56:04 crc kubenswrapper[4751]: > Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.589371 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.591176 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.593930 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.686272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.687018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.789167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.789276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.791022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.810998 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"crc-debug-q2xqd\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: I0130 22:56:04.915753 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:56:04 crc kubenswrapper[4751]: W0130 22:56:04.958465 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b77c61_26de_47b9_a360_961173e352c9.slice/crio-456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671 WatchSource:0}: Error finding container 456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671: Status 404 returned error can't find the container with id 456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671 Jan 30 22:56:05 crc kubenswrapper[4751]: I0130 22:56:05.381453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerStarted","Data":"456800dcd9c3fd3c4100ffbf0b8217bc5765221ee9f1391a2fc60f7b32123671"} Jan 30 22:56:13 crc kubenswrapper[4751]: I0130 22:56:13.271350 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:13 crc kubenswrapper[4751]: I0130 22:56:13.351176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.185509 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.426945 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:14 crc kubenswrapper[4751]: > Jan 30 22:56:14 crc kubenswrapper[4751]: I0130 22:56:14.481863 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9d6t" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" containerID="cri-o://7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" gracePeriod=2 Jan 30 22:56:15 crc kubenswrapper[4751]: I0130 22:56:15.496198 4751 generic.go:334] "Generic (PLEG): container finished" podID="822b4327-52bb-4f05-a391-3afff2cfe815" containerID="7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" exitCode=0 Jan 30 22:56:15 crc kubenswrapper[4751]: I0130 22:56:15.496252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2"} Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.917844 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922170 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.922389 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") pod \"822b4327-52bb-4f05-a391-3afff2cfe815\" (UID: \"822b4327-52bb-4f05-a391-3afff2cfe815\") " Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.924079 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities" (OuterVolumeSpecName: "utilities") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.932286 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk" (OuterVolumeSpecName: "kube-api-access-w6wxk") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "kube-api-access-w6wxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:56:17 crc kubenswrapper[4751]: I0130 22:56:17.951232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822b4327-52bb-4f05-a391-3afff2cfe815" (UID: "822b4327-52bb-4f05-a391-3afff2cfe815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025210 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025250 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6wxk\" (UniqueName: \"kubernetes.io/projected/822b4327-52bb-4f05-a391-3afff2cfe815-kube-api-access-w6wxk\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.025261 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822b4327-52bb-4f05-a391-3afff2cfe815-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9d6t" event={"ID":"822b4327-52bb-4f05-a391-3afff2cfe815","Type":"ContainerDied","Data":"657149d18faffd7be299b21cac1edbfbf6c55b1e66a8d6b785bd39e1fe11d816"} Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531632 4751 scope.go:117] "RemoveContainer" containerID="7dfdf62a347d67bf4af34a06215c821cdbe98fcbb138c7e84242629f3eca78c2" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.531314 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9d6t" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.537301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerStarted","Data":"c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8"} Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.565398 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.582489 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9d6t"] Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.594166 4751 scope.go:117] "RemoveContainer" containerID="4381b59162725a7444c1506a2669a6cdd0777d370e2c8d19b2767ff2dc05806f" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.595874 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" podStartSLOduration=2.235963955 podStartE2EDuration="14.595860542s" podCreationTimestamp="2026-01-30 22:56:04 +0000 UTC" firstStartedPulling="2026-01-30 22:56:04.961024258 +0000 UTC m=+6103.706846907" lastFinishedPulling="2026-01-30 22:56:17.320920845 +0000 UTC m=+6116.066743494" observedRunningTime="2026-01-30 22:56:18.573495991 +0000 UTC m=+6117.319318640" watchObservedRunningTime="2026-01-30 22:56:18.595860542 +0000 UTC m=+6117.341683201" Jan 30 22:56:18 crc kubenswrapper[4751]: I0130 22:56:18.622632 4751 scope.go:117] "RemoveContainer" containerID="f2ee661fb62008f3e20783cee091203d1a7edba8ecec73406742c81805479ab7" Jan 30 22:56:19 crc kubenswrapper[4751]: I0130 22:56:19.989055 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" path="/var/lib/kubelet/pods/822b4327-52bb-4f05-a391-3afff2cfe815/volumes" Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.126479 4751 patch_prober.go:28] 
interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.127088 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:56:24 crc kubenswrapper[4751]: I0130 22:56:24.437078 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:24 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:24 crc kubenswrapper[4751]: > Jan 30 22:56:34 crc kubenswrapper[4751]: I0130 22:56:34.439096 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" probeResult="failure" output=< Jan 30 22:56:34 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 22:56:34 crc kubenswrapper[4751]: > Jan 30 22:56:43 crc kubenswrapper[4751]: I0130 22:56:43.454251 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:43 crc kubenswrapper[4751]: I0130 22:56:43.523514 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:44 crc kubenswrapper[4751]: I0130 22:56:44.208138 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:45 crc kubenswrapper[4751]: I0130 22:56:45.276818 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2lghv" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" containerID="cri-o://af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" gracePeriod=2 Jan 30 22:56:45 crc kubenswrapper[4751]: I0130 22:56:45.894782 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.032892 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") pod \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\" (UID: \"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c\") " Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.033353 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities" (OuterVolumeSpecName: "utilities") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.033945 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.043504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm" (OuterVolumeSpecName: "kube-api-access-72lpm") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "kube-api-access-72lpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.136493 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lpm\" (UniqueName: \"kubernetes.io/projected/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-kube-api-access-72lpm\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.147806 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" (UID: "3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.239497 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290827 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" exitCode=0 Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2lghv" event={"ID":"3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c","Type":"ContainerDied","Data":"d450517e757eb66afa3e8e00e500666c0b3dc5e030c10dd122a698c77bf8e56d"} Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.290925 4751 scope.go:117] "RemoveContainer" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.291086 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2lghv" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.334461 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.334431 4751 scope.go:117] "RemoveContainer" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.348588 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2lghv"] Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.358061 4751 scope.go:117] "RemoveContainer" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.415498 4751 scope.go:117] "RemoveContainer" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.416030 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": container with ID starting with af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05 not found: ID does not exist" containerID="af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416092 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05"} err="failed to get container status \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": rpc error: code = NotFound desc = could not find container \"af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05\": container with ID starting with af1ce8f85915397c417bab40a5f0c6d4ea6cb5f742bba643f5ba76786c50fc05 not found: ID does not exist" Jan 30 22:56:46 crc 
kubenswrapper[4751]: I0130 22:56:46.416215 4751 scope.go:117] "RemoveContainer" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.416721 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": container with ID starting with 0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155 not found: ID does not exist" containerID="0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416756 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155"} err="failed to get container status \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": rpc error: code = NotFound desc = could not find container \"0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155\": container with ID starting with 0ddc7523ae323025c6035da055db9c099b9b434c59df935923f0eeb866d35155 not found: ID does not exist" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.416778 4751 scope.go:117] "RemoveContainer" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: E0130 22:56:46.417194 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": container with ID starting with 2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f not found: ID does not exist" containerID="2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f" Jan 30 22:56:46 crc kubenswrapper[4751]: I0130 22:56:46.417286 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f"} err="failed to get container status \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": rpc error: code = NotFound desc = could not find container \"2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f\": container with ID starting with 2ec7029e3614aaaf9779f2863efc1aa2a974de2154be7eae73a1e2c993e44e4f not found: ID does not exist" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.002478 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" path="/var/lib/kubelet/pods/3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c/volumes" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.434722 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435817 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435851 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435896 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435927 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-content" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435954 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.435966 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.435994 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436005 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.436020 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436032 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: E0130 22:56:48.436080 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436090 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="extract-utilities" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436529 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce2ef1f-4ddb-4e3c-84e6-0f5add1d8a2c" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.436577 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="822b4327-52bb-4f05-a391-3afff2cfe815" containerName="registry-server" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.438857 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.456929 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494717 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.494811 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.597988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.598063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.622677 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"certified-operators-8nlg5\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:48 crc kubenswrapper[4751]: I0130 22:56:48.775751 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:49 crc kubenswrapper[4751]: I0130 22:56:49.328653 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.343632 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" exitCode=0 Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.343860 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628"} Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.344010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"6041280b56f65fdcdd4871be580592741f2e8226384f8c7ef10d68a5b4288cd0"} Jan 30 22:56:50 crc kubenswrapper[4751]: I0130 22:56:50.348076 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 22:56:51 crc kubenswrapper[4751]: I0130 22:56:51.356068 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} Jan 30 22:56:53 crc kubenswrapper[4751]: I0130 22:56:53.379902 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" exitCode=0 Jan 30 22:56:53 crc kubenswrapper[4751]: I0130 22:56:53.379998 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.126883 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.127417 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.127463 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.128541 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.129016 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67" gracePeriod=600 Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.390936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerStarted","Data":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394449 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67" exitCode=0 Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"} Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.394526 4751 scope.go:117] "RemoveContainer" containerID="ca52e048774eb2e5d849ba06566277d7b5d76bea793907ff16db41b47c07f232" Jan 30 22:56:54 crc kubenswrapper[4751]: I0130 22:56:54.500235 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8nlg5" podStartSLOduration=3.000526477 podStartE2EDuration="6.500209329s" podCreationTimestamp="2026-01-30 22:56:48 +0000 UTC" firstStartedPulling="2026-01-30 22:56:50.347220321 +0000 UTC m=+6149.093042980" lastFinishedPulling="2026-01-30 22:56:53.846903183 +0000 UTC m=+6152.592725832" observedRunningTime="2026-01-30 22:56:54.473060576 +0000 UTC m=+6153.218883225" watchObservedRunningTime="2026-01-30 22:56:54.500209329 +0000 UTC m=+6153.246031988" Jan 30 22:56:55 crc kubenswrapper[4751]: I0130 22:56:55.406528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"} Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.776337 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.776913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:58 crc kubenswrapper[4751]: I0130 22:56:58.829644 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:59 crc 
kubenswrapper[4751]: I0130 22:56:59.508617 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:56:59 crc kubenswrapper[4751]: I0130 22:56:59.574105 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:01 crc kubenswrapper[4751]: I0130 22:57:01.487374 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8nlg5" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" containerID="cri-o://463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" gracePeriod=2 Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.136695 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.224743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.224844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.225008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") pod \"c004278d-44c5-46da-9372-3773f2bd0c80\" (UID: \"c004278d-44c5-46da-9372-3773f2bd0c80\") " Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.230243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities" (OuterVolumeSpecName: "utilities") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.234655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b" (OuterVolumeSpecName: "kube-api-access-w5x5b") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "kube-api-access-w5x5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.268358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c004278d-44c5-46da-9372-3773f2bd0c80" (UID: "c004278d-44c5-46da-9372-3773f2bd0c80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327698 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5x5b\" (UniqueName: \"kubernetes.io/projected/c004278d-44c5-46da-9372-3773f2bd0c80-kube-api-access-w5x5b\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327745 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.327759 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c004278d-44c5-46da-9372-3773f2bd0c80-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552067 4751 generic.go:334] "Generic (PLEG): container finished" podID="c004278d-44c5-46da-9372-3773f2bd0c80" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" exitCode=0 Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8nlg5" event={"ID":"c004278d-44c5-46da-9372-3773f2bd0c80","Type":"ContainerDied","Data":"6041280b56f65fdcdd4871be580592741f2e8226384f8c7ef10d68a5b4288cd0"} Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552171 4751 scope.go:117] "RemoveContainer" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.552373 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8nlg5" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.579189 4751 scope.go:117] "RemoveContainer" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.599623 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.608414 4751 scope.go:117] "RemoveContainer" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.623227 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8nlg5"] Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.667741 4751 scope.go:117] "RemoveContainer" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668180 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": container with ID starting with 463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f not found: ID does not exist" containerID="463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668219 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f"} err="failed to get container status \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": rpc error: code = NotFound desc = could not find container \"463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f\": container with ID starting with 463f138b388c87d47e6e8f48e58f354e49ab3310ba323ddaa509ccb44e7b4d0f not found: ID does not exist" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668244 4751 scope.go:117] "RemoveContainer" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668481 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": container with ID starting with 8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732 not found: ID does not exist" containerID="8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668503 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732"} err="failed to get container status \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": rpc error: code = NotFound desc = could not find container \"8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732\": container with ID starting with 8c03900c4c85ebffcec258e1dae8d8fefedd5bf403d712725a8faf7ba6c8a732 not found: ID does not exist" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668518 4751 scope.go:117] "RemoveContainer" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: E0130 22:57:02.668747 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": container with ID starting with 8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628 not found: ID does not exist" containerID="8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628" Jan 30 22:57:02 crc kubenswrapper[4751]: I0130 22:57:02.668780 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628"} err="failed to get container status \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": rpc error: code = NotFound desc = could not find container \"8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628\": container with ID starting with 8d4c1f79e1a366d6c84305e6d2fbd5a9ecc7e8f5de5799a6f10128cf1a39a628 not found: ID does not exist" Jan 30 22:57:03 crc kubenswrapper[4751]: I0130 22:57:03.987043 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" path="/var/lib/kubelet/pods/c004278d-44c5-46da-9372-3773f2bd0c80/volumes" Jan 30 22:57:12 crc kubenswrapper[4751]: I0130 22:57:12.731829 4751 generic.go:334] "Generic (PLEG): container finished" podID="01b77c61-26de-47b9-a360-961173e352c9" containerID="c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8" exitCode=0 Jan 30 22:57:12 crc kubenswrapper[4751]: I0130 22:57:12.731937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" event={"ID":"01b77c61-26de-47b9-a360-961173e352c9","Type":"ContainerDied","Data":"c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8"} Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.862886 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.907232 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.918428 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-q2xqd"] Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") pod \"01b77c61-26de-47b9-a360-961173e352c9\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") pod \"01b77c61-26de-47b9-a360-961173e352c9\" (UID: \"01b77c61-26de-47b9-a360-961173e352c9\") " Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.921533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host" (OuterVolumeSpecName: "host") pod "01b77c61-26de-47b9-a360-961173e352c9" (UID: "01b77c61-26de-47b9-a360-961173e352c9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.922425 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01b77c61-26de-47b9-a360-961173e352c9-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.928562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j" (OuterVolumeSpecName: "kube-api-access-rwq9j") pod "01b77c61-26de-47b9-a360-961173e352c9" (UID: "01b77c61-26de-47b9-a360-961173e352c9"). InnerVolumeSpecName "kube-api-access-rwq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:13 crc kubenswrapper[4751]: I0130 22:57:13.994738 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b77c61-26de-47b9-a360-961173e352c9" path="/var/lib/kubelet/pods/01b77c61-26de-47b9-a360-961173e352c9/volumes" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.025204 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwq9j\" (UniqueName: \"kubernetes.io/projected/01b77c61-26de-47b9-a360-961173e352c9-kube-api-access-rwq9j\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.754410 4751 scope.go:117] "RemoveContainer" containerID="c96911f8f05d80b4e318ee559b561fbac28c80da80508a7352f791c9f10292c8" Jan 30 22:57:14 crc kubenswrapper[4751]: I0130 22:57:14.754439 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-q2xqd" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.128763 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129481 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-utilities" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129494 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-utilities" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129510 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129516 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129532 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129538 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: E0130 22:57:15.129553 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-content" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129559 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="extract-content" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129807 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01b77c61-26de-47b9-a360-961173e352c9" containerName="container-00" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.129818 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c004278d-44c5-46da-9372-3773f2bd0c80" containerName="registry-server" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.130657 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.133144 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.257984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.258208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.361435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.378557 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"crc-debug-jpv4n\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.454695 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:15 crc kubenswrapper[4751]: W0130 22:57:15.500916 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fcbb32f_05b2_4221_898b_83822813a738.slice/crio-407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe WatchSource:0}: Error finding container 407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe: Status 404 returned error can't find the container with id 407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe Jan 30 22:57:15 crc kubenswrapper[4751]: I0130 22:57:15.767452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerStarted","Data":"407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe"} Jan 30 22:57:16 crc kubenswrapper[4751]: I0130 22:57:16.783087 4751 generic.go:334] "Generic (PLEG): container finished" podID="1fcbb32f-05b2-4221-898b-83822813a738" containerID="77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13" exitCode=0 Jan 30 22:57:16 crc kubenswrapper[4751]: I0130 22:57:16.783147 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerDied","Data":"77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13"} Jan 30 22:57:17 crc kubenswrapper[4751]: I0130 22:57:17.947843 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.023258 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") pod \"1fcbb32f-05b2-4221-898b-83822813a738\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.023707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") pod \"1fcbb32f-05b2-4221-898b-83822813a738\" (UID: \"1fcbb32f-05b2-4221-898b-83822813a738\") " Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.024149 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host" (OuterVolumeSpecName: "host") pod "1fcbb32f-05b2-4221-898b-83822813a738" (UID: "1fcbb32f-05b2-4221-898b-83822813a738"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.026684 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1fcbb32f-05b2-4221-898b-83822813a738-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.030613 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz" (OuterVolumeSpecName: "kube-api-access-ln8lz") pod "1fcbb32f-05b2-4221-898b-83822813a738" (UID: "1fcbb32f-05b2-4221-898b-83822813a738"). InnerVolumeSpecName "kube-api-access-ln8lz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.128376 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8lz\" (UniqueName: \"kubernetes.io/projected/1fcbb32f-05b2-4221-898b-83822813a738-kube-api-access-ln8lz\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815166 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" event={"ID":"1fcbb32f-05b2-4221-898b-83822813a738","Type":"ContainerDied","Data":"407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe"} Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815205 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-jpv4n" Jan 30 22:57:18 crc kubenswrapper[4751]: I0130 22:57:18.815215 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="407d0704b0d1230a5ffd41fc5a1a7acf15dabbdb5d64eeddeea64e42546abfbe" Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.276952 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.287286 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-jpv4n"] Jan 30 22:57:19 crc kubenswrapper[4751]: I0130 22:57:19.990246 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fcbb32f-05b2-4221-898b-83822813a738" path="/var/lib/kubelet/pods/1fcbb32f-05b2-4221-898b-83822813a738/volumes" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.471356 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:20 crc kubenswrapper[4751]: E0130 22:57:20.471906 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.471921 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.472132 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fcbb32f-05b2-4221-898b-83822813a738" containerName="container-00" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.473000 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.474920 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-89qbh"/"default-dockercfg-62sr4" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.593864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.594321 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696144 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696448 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.696707 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.715968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"crc-debug-9zdxm\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.791521 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:20 crc kubenswrapper[4751]: I0130 22:57:20.840184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" event={"ID":"e60a3a88-2975-458f-a5e9-422a6c519f65","Type":"ContainerStarted","Data":"737ab55ebd2abf50c9643b0c5e1d3b6670f98d11955015081ef7846fe31b41ce"} Jan 30 22:57:21 crc kubenswrapper[4751]: I0130 22:57:21.889037 4751 generic.go:334] "Generic (PLEG): container finished" podID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerID="be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3" exitCode=0 Jan 30 22:57:21 crc kubenswrapper[4751]: I0130 22:57:21.889291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" event={"ID":"e60a3a88-2975-458f-a5e9-422a6c519f65","Type":"ContainerDied","Data":"be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3"} Jan 30 22:57:22 crc kubenswrapper[4751]: I0130 22:57:22.035736 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:22 crc kubenswrapper[4751]: I0130 22:57:22.047384 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/crc-debug-9zdxm"] Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.025308 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.169743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") pod \"e60a3a88-2975-458f-a5e9-422a6c519f65\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.169848 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") pod \"e60a3a88-2975-458f-a5e9-422a6c519f65\" (UID: \"e60a3a88-2975-458f-a5e9-422a6c519f65\") " Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.170239 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host" (OuterVolumeSpecName: "host") pod "e60a3a88-2975-458f-a5e9-422a6c519f65" (UID: "e60a3a88-2975-458f-a5e9-422a6c519f65"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.170685 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e60a3a88-2975-458f-a5e9-422a6c519f65-host\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.182005 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2" (OuterVolumeSpecName: "kube-api-access-vzjc2") pod "e60a3a88-2975-458f-a5e9-422a6c519f65" (UID: "e60a3a88-2975-458f-a5e9-422a6c519f65"). InnerVolumeSpecName "kube-api-access-vzjc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.272956 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjc2\" (UniqueName: \"kubernetes.io/projected/e60a3a88-2975-458f-a5e9-422a6c519f65-kube-api-access-vzjc2\") on node \"crc\" DevicePath \"\"" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.921125 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737ab55ebd2abf50c9643b0c5e1d3b6670f98d11955015081ef7846fe31b41ce" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.921177 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-89qbh/crc-debug-9zdxm" Jan 30 22:57:23 crc kubenswrapper[4751]: I0130 22:57:23.990760 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" path="/var/lib/kubelet/pods/e60a3a88-2975-458f-a5e9-422a6c519f65/volumes" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.682078 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-api/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.854185 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-evaluator/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.855070 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-listener/0.log" Jan 30 22:57:48 crc kubenswrapper[4751]: I0130 22:57:48.918279 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_0c9eccf2-9252-4f35-9aff-56f0e15102a1/aodh-notifier/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.173581 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b7f497ffb-fkntp_1a2838e6-7563-4e97-893d-58d8619b780b/barbican-api-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.186923 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b7f497ffb-fkntp_1a2838e6-7563-4e97-893d-58d8619b780b/barbican-api/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.358986 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b859c9db-tvldd_334843b7-3c66-42fa-8880-4337946df593/barbican-keystone-listener/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.508176 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b859c9db-tvldd_334843b7-3c66-42fa-8880-4337946df593/barbican-keystone-listener-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.559487 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd66f57b7-5jqls_76562ec1-fb40-4590-9d96-f05cafc13640/barbican-worker/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.601875 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd66f57b7-5jqls_76562ec1-fb40-4590-9d96-f05cafc13640/barbican-worker-log/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 22:57:49.761791 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jxgsl_25d1f8e8-75ed-46ae-b674-87f34c4edbfa/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:49 crc kubenswrapper[4751]: I0130 
22:57:49.902991 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/ceilometer-central-agent/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.008985 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/proxy-httpd/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.059882 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/ceilometer-notification-agent/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.126164 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c69dc070-7de6-4681-a44b-6e2007a7f671/sg-core/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.308061 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e741273e-caa0-4a2c-9ed0-6bae195052ce/cinder-api-log/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.312820 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e741273e-caa0-4a2c-9ed0-6bae195052ce/cinder-api/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.482010 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56/cinder-scheduler/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.574293 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_927e4c2b-4fb5-4ccb-adeb-8847ea0c4c56/probe/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.646374 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rfrjz_a21b5781-ce12-434c-9f38-47bf5f6ad332/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.800953 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-czgz2_39c3c6d6-5ce3-4522-acc1-1ebbe5748f0d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:50 crc kubenswrapper[4751]: I0130 22:57:50.901762 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/init/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.113948 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/init/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.171586 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-czw8p_b45d4d88-6b91-4bfc-9619-68fdb7d90f05/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.195289 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-7h4pb_1eb1b0d1-2407-440a-826b-b5158aab8be3/dnsmasq-dns/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.394260 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cef73daf-a49c-4b32-8ebc-fe0adf90df58/glance-httpd/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.426773 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_cef73daf-a49c-4b32-8ebc-fe0adf90df58/glance-log/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.611406 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4dcf400d-5171-4388-bfbc-18d62a106a12/glance-log/0.log" Jan 30 22:57:51 crc kubenswrapper[4751]: I0130 22:57:51.614062 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4dcf400d-5171-4388-bfbc-18d62a106a12/glance-httpd/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.347873 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-68c4b8fdd-wvfwg_ce637680-0e89-4089-bbb7-704117a5dcb0/heat-api/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.522603 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-75666c8dc5-6rmsl_3100f81b-465d-42f8-9bbd-88e0aecbdc56/heat-cfnapi/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.606743 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7ccc7fc744-trd9b_2465732f-6109-4d66-84c4-f08a6a1ac472/heat-engine/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.664799 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5nz6d_d9b249ee-25bd-4b25-aaaf-57c3a55dad1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.743857 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-trgr7_aa80e137-3a03-4857-9ec0-aa2f9b58df0d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:52 crc kubenswrapper[4751]: I0130 22:57:52.951075 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496841-qnsrj_ec292c3e-470e-4f61-92e9-4e2c8098f879/keystone-cron/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.069685 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dc0cb07e-2f77-49e2-931f-c896c3962f9d/kube-state-metrics/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.260240 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9j62f_64c0e484-536b-4bf5-9f35-2bfc04b14133/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.369389 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-55986d9fc9-zjsx4_aab674da-e1ff-4881-9432-fad6b85111f2/keystone-api/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.405127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-jn6hm_61149618-7cc3-4dd6-b61a-0fb8226f2cc1/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.547997 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e7f85043-bc84-41e2-9f14-a08f96da06f2/mysqld-exporter/0.log" Jan 30 22:57:53 crc kubenswrapper[4751]: I0130 22:57:53.984254 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6989c95c85-6thsl_68910b8d-2ec3-4b7c-956c-e3d3518042cf/neutron-httpd/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.026408 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-p77mj_9d2edd75-7066-43c1-9636-149a176ee575/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.117744 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6989c95c85-6thsl_68910b8d-2ec3-4b7c-956c-e3d3518042cf/neutron-api/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.648058 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9bb304d7-db8e-4943-b0bc-d30a4332df91/nova-cell0-conductor-conductor/0.log" Jan 30 22:57:54 crc kubenswrapper[4751]: I0130 22:57:54.928020 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e3c7d82f-3209-44cf-a463-9affaab3de75/nova-api-log/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.001672 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6c1153d5-e025-439d-9799-8bf38014a585/nova-cell1-conductor-conductor/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.269340 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_150d4911-b366-4c81-b4fa-b5c5e8cadc78/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.320363 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gjscv_7165caae-e471-463b-9f66-be7fb4c7c463/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.365560 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e3c7d82f-3209-44cf-a463-9affaab3de75/nova-api-api/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.569132 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_179951f5-39be-43d7-a2fa-3c6f04555760/nova-metadata-log/0.log" Jan 30 22:57:55 crc kubenswrapper[4751]: I0130 22:57:55.976566 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/mysql-bootstrap/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.027734 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_977b9205-4c23-4ff0-9193-5938e4b87c64/nova-scheduler-scheduler/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.153389 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/mysql-bootstrap/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.319271 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a2ea2a70-ac95-42dc-91fa-6d6c7e8ace32/galera/0.log" Jan 30 22:57:56 crc kubenswrapper[4751]: I0130 22:57:56.433885 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/mysql-bootstrap/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.041269 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/mysql-bootstrap/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.059186 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d55cd7e5-6799-4e1a-9f3b-a92937aca796/galera/0.log" Jan 30 22:57:57 crc 
kubenswrapper[4751]: I0130 22:57:57.305275 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_af93872a-62a1-407c-9932-2afb4313f457/openstackclient/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.362012 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g9s48_fbc382fd-1513-4137-b801-5627cc5886ea/ovn-controller/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.625938 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bddb_e60cf673-3513-4af6-ac72-280908e95405/openstack-network-exporter/0.log" Jan 30 22:57:57 crc kubenswrapper[4751]: I0130 22:57:57.883091 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server-init/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.072367 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_179951f5-39be-43d7-a2fa-3c6f04555760/nova-metadata-metadata/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.190757 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server-init/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.193313 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovsdb-server/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.197838 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f4rx8_071bab49-34f0-4fef-849e-c2530b4c423c/ovs-vswitchd/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.463479 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f31a7def-755f-49e8-bf97-7e155bcc5113/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.465936 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-g7v98_43548d7f-01a0-4905-a26d-424ba948cbe8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.646859 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f31a7def-755f-49e8-bf97-7e155bcc5113/ovn-northd/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.717091 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47614a4a-f824-4eb4-9f46-bf1ab137d364/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.886819 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47614a4a-f824-4eb4-9f46-bf1ab137d364/ovsdbserver-nb/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.886996 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f8708be-4bf5-440d-a6e3-876acf844253/openstack-network-exporter/0.log" Jan 30 22:57:58 crc kubenswrapper[4751]: I0130 22:57:58.984151 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1f8708be-4bf5-440d-a6e3-876acf844253/ovsdbserver-sb/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.329915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bcbb59b46-2xhmj_0cb6a4c8-d098-48b5-8ffe-ff46a64bc377/placement-api/0.log" Jan 30 
22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.447429 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6bcbb59b46-2xhmj_0cb6a4c8-d098-48b5-8ffe-ff46a64bc377/placement-log/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.563484 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/init-config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.706973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/init-config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.763863 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/config-reloader/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.765724 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/prometheus/0.log" Jan 30 22:57:59 crc kubenswrapper[4751]: I0130 22:57:59.812672 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e7af95c-7ba2-4e0b-9947-795d9629744c/thanos-sidecar/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.037158 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.209081 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.310524 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_aa019efa-4067-4bd5-b370-12f6a4e6b856/rabbitmq/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.354233 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.623135 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/rabbitmq/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.657806 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4ab0c22c-f078-413c-ac94-9e543a02c3fb/setup-container/0.log" Jan 30 22:58:00 crc kubenswrapper[4751]: I0130 22:58:00.711775 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:00.999756 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.164397 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_279dd57b-8f7d-4730-a9ee-cf124f8c0d52/rabbitmq/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.255576 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/setup-container/0.log" Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.646938 
4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/setup-container/0.log"
Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.705121 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_29afad92-51c9-45a8-a6a0-ed64925f91f3/rabbitmq/0.log"
Jan 30 22:58:01 crc kubenswrapper[4751]: I0130 22:58:01.841998 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fhztp_0562f716-fdf2-41ff-bb36-5474fa9be5c0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.038484 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2bbbp_a4b9ecbd-4cf2-4554-b209-d7a421499f08/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.265194 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q8c7w_37b91419-687f-4907-888d-9344d1e8602a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.309619 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zbttg_10f27009-b34c-43f0-999f-64c2e2316013/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.593473 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gtfdq_1c9c26ff-407a-4595-8406-e3a0d46450aa/ssh-known-hosts-edpm-deployment/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.901483 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58dc6df599-nmmxw_b9f02a32-18ed-4030-94d6-16f4d0feff52/proxy-server/0.log"
Jan 30 22:58:02 crc kubenswrapper[4751]: I0130 22:58:02.928360 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vvq25_70af95fb-5ca8-4482-a1bc-81b1891e0da7/swift-ring-rebalance/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.080022 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58dc6df599-nmmxw_b9f02a32-18ed-4030-94d6-16f4d0feff52/proxy-httpd/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.206096 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-reaper/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.271578 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-auditor/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.426766 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-server/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.448011 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/account-replicator/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.538690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-auditor/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.609512 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-replicator/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.704690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-server/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.763734 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/container-updater/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.831146 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-expirer/0.log"
Jan 30 22:58:03 crc kubenswrapper[4751]: I0130 22:58:03.868175 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-auditor/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.018407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-server/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.019732 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-replicator/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.102318 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/object-updater/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.154073 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/rsync/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.267919 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4f6a1442-f7f7-499a-a7d5-c354d76ba9d5/swift-recon-cron/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.413311 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2qxzx_93c2956e-910c-4604-a9ba-86289f854a59/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.556510 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhz2v_ac636140-8b68-474a-a7f9-7d46e6a22de0/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:04 crc kubenswrapper[4751]: I0130 22:58:04.810084 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3555a827-6ba2-4057-a142-ea2818a3d76e/test-operator-logs-container/0.log"
Jan 30 22:58:05 crc kubenswrapper[4751]: I0130 22:58:05.090302 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-plxr5_538f9f69-1642-4944-a5e1-7348a104c5e6/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 22:58:05 crc kubenswrapper[4751]: I0130 22:58:05.647647 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_053bddc4-b1a1-4951-af33-6230acd3ee0b/tempest-tests-tempest-tests-runner/0.log"
Jan 30 22:58:10 crc kubenswrapper[4751]: I0130 22:58:10.718494 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_14c5f0f0-6d85-4d60-9daa-7fa3b401a884/memcached/0.log"
Jan 30 22:58:35 crc kubenswrapper[4751]: I0130 22:58:35.634421 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7mpjw_236db419-e197-4a85-ab49-58cf38babea6/manager/0.log"
Jan 30 22:58:35 crc kubenswrapper[4751]: I0130 22:58:35.810911 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.019926 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.034086 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.070379 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.190127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/pull/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.232213 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/util/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.235044 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c2193e9bc444b90e210f090c3afd8986af29f27f24ea0af818bf2b2bfc7zf9m_8fed4afd-9214-4ec9-816d-2ba6213f2f89/extract/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.426449 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6fg4r_9003ffe6-59a3-4c7c-96d0-d129a9339247/manager/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.492804 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-ph5lf_f8cf0eb3-a93d-4462-b5ac-bbaaebf6daf9/manager/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.682090 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-b65fl_0fd5051a-5be4-4336-af86-9674469b76a0/manager/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.807005 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-jxkmf_3fae5204-d3a1-4e39-ac3d-d28c8a55c7db/manager/0.log"
Jan 30 22:58:36 crc kubenswrapper[4751]: I0130 22:58:36.903041 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-hsbbr_0b3a96d4-f5fc-47be-9c28-47239b2488c1/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.346541 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-52vr2_2d6f1acc-6416-44ae-9082-3ebe16dce448/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.438941 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-n2shb_9a88f139-89db-4b3a-8fea-bf951e59f564/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.603704 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-sw6zv_b2777bff-2cca-4f41-8655-a737f13b4885/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.661413 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-7sk5v_694b29bc-994c-4983-81c7-b32d47db553b/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.838952 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-xk52h_1ad347ea-d2ce-4a1e-912a-8471445396f7/manager/0.log"
Jan 30 22:58:37 crc kubenswrapper[4751]: I0130 22:58:37.934792 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-9vvgb_4a416a7c-3094-46ef-8370-9cad7446339b/manager/0.log"
Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.152380 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-tbp7n_e596dcc9-7f31-4312-99e3-7d86d318ef9d/manager/0.log"
Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.159264 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-d6slz_fcf49997-888f-4e58-99e7-f1f677dc7111/manager/0.log"
Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.321387 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dqs6wk_0026e471-8226-4038-8c52-f0add2877c8d/manager/0.log"
Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.542233 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55fdcd6c79-9hzxh_4b543295-a1a6-40ad-8b74-0ee6fdeb66c3/operator/0.log"
Jan 30 22:58:38 crc kubenswrapper[4751]: I0130 22:58:38.798844 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lw6gm_bd6eaa60-4995-4ace-8ab0-a880f09cbee0/registry-server/0.log"
Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.073420 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-c7tj6_c711cf07-a695-447a-8d01-147b10e9059f/manager/0.log"
Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.278788 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dx8wk_ace28553-76bc-4472-a671-788e1fb9a1ff/manager/0.log"
Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.421116 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v8vch_a986231c-2119-4a13-801d-51119db5d365/operator/0.log"
Jan 30 22:58:39 crc kubenswrapper[4751]: I0130 22:58:39.708715 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-r6smn_0c86abfd-77a9-4388-8b7f-b61bb378f7cb/manager/0.log"
Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.128642 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-sc9gq_3d59cc79-1a37-434a-a04b-156739f469d7/manager/0.log"
Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.386205 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-gcvgx_cbae5889-938b-4211-94a6-de960df2f95d/manager/0.log"
Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.802259 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6749767b8f-62rqr_3b9cc057-30d7-4a03-8c76-a1ca7200dbae/manager/0.log"
Jan 30 22:58:40 crc kubenswrapper[4751]: I0130 22:58:40.952571 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d48698d88-jbmh6_dac6f1f3-8549-488c-bb63-aa980f4a1282/manager/0.log"
Jan 30 22:58:54 crc kubenswrapper[4751]: I0130 22:58:54.127390 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:58:54 crc kubenswrapper[4751]: I0130 22:58:54.127960 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.490731 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xf2m8_357257a0-2b96-4833-84cb-1c4326c34e61/control-plane-machine-set-operator/0.log"
Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.731711 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nk5rn_ebb4c857-4f54-440f-81d7-74eadc588099/kube-rbac-proxy/0.log"
Jan 30 22:59:01 crc kubenswrapper[4751]: I0130 22:59:01.768849 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-nk5rn_ebb4c857-4f54-440f-81d7-74eadc588099/machine-api-operator/0.log"
Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.387217 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mbzjn_9e1b9b2c-a431-4bc0-abb3-1db5cc759fbd/cert-manager-controller/0.log"
Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.507816 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9k9rg_04bdab63-06c1-475f-8351-a2ccc4292f25/cert-manager-cainjector/0.log"
Jan 30 22:59:15 crc kubenswrapper[4751]: I0130 22:59:15.617607 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-shbmk_9acdc588-bef3-4ce2-bf06-afea86273408/cert-manager-webhook/0.log"
Jan 30 22:59:24 crc kubenswrapper[4751]: I0130 22:59:24.126910 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:59:24 crc kubenswrapper[4751]: I0130 22:59:24.127434 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.373215 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kxkfz_2806dd41-f23b-466a-a187-4689685f6b86/nmstate-console-plugin/0.log"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.559870 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-d95cp_eea5deed-9d07-45b2-b400-64b7c2336994/nmstate-handler/0.log"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.664914 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rfrtx_f0ccd951-df7f-452f-b340-64fa7c9f9916/kube-rbac-proxy/0.log"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.734692 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rfrtx_f0ccd951-df7f-452f-b340-64fa7c9f9916/nmstate-metrics/0.log"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.823554 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-k49vc_c9f603b5-de3a-4d5e-acc1-6da32a99dcaa/nmstate-operator/0.log"
Jan 30 22:59:28 crc kubenswrapper[4751]: I0130 22:59:28.913230 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7hqmv_be191f8d-d8ce-4f29-95f1-1278c108ca11/nmstate-webhook/0.log"
Jan 30 22:59:41 crc kubenswrapper[4751]: I0130 22:59:41.112796 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/kube-rbac-proxy/0.log"
Jan 30 22:59:41 crc kubenswrapper[4751]: I0130 22:59:41.142420 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/manager/0.log"
Jan 30 22:59:53 crc kubenswrapper[4751]: I0130 22:59:53.954536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5nv4n_96f3e554-fbfc-4716-b6ee-0913394521fa/prometheus-operator/0.log"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.126827 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.126994 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.127075 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.128425 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"} pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.128520 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" gracePeriod=600
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.195790 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-n4cw4_c0edc270-3913-41f7-9218-32549d1d3dea/prometheus-operator-admission-webhook/0.log"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.281184 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-vpng2_16999302-ac18-4e1c-b3f7-a2bf3f7605aa/prometheus-operator-admission-webhook/0.log"
Jan 30 22:59:54 crc kubenswrapper[4751]: E0130 22:59:54.356978 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.510906 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p97jc_0d7cf074-b623-45d0-ac84-c1e52a626885/observability-ui-dashboards/0.log"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521896 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" exitCode=0
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"}
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.521975 4751 scope.go:117] "RemoveContainer" containerID="1f994498b8705c718253f1d686dfa142a31e491cf05bc7e00a9d3f4b2c57ea67"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.522883 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 22:59:54 crc kubenswrapper[4751]: E0130 22:59:54.523209 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.541229 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lhkl2_3ee6b659-c8c9-4f07-a897-c69db812f880/operator/0.log"
Jan 30 22:59:54 crc kubenswrapper[4751]: I0130 22:59:54.720990 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l498d_7472790e-3a0e-40dd-909c-4301ba84d884/perses-operator/0.log"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.254780 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"]
Jan 30 23:00:00 crc kubenswrapper[4751]: E0130 23:00:00.255845 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.255861 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.256100 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60a3a88-2975-458f-a5e9-422a6c519f65" containerName="container-00"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.260724 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.265742 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.269046 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.284695 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"]
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.363490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.465755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.466037 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.466589 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.467711 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.508296 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.508922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"collect-profiles-29496900-whf7z\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:00 crc kubenswrapper[4751]: I0130 23:00:00.583913 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:01 crc kubenswrapper[4751]: I0130 23:00:01.745027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"]
Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.616097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerStarted","Data":"3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be"}
Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.617618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerStarted","Data":"130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05"}
Jan 30 23:00:02 crc kubenswrapper[4751]: I0130 23:00:02.636754 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" podStartSLOduration=2.636731638 podStartE2EDuration="2.636731638s" podCreationTimestamp="2026-01-30 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:00:02.634715473 +0000 UTC m=+6341.380538132" watchObservedRunningTime="2026-01-30 23:00:02.636731638 +0000 UTC m=+6341.382554287"
Jan 30 23:00:03 crc kubenswrapper[4751]: I0130 23:00:03.627493 4751 generic.go:334] "Generic (PLEG): container finished" podID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerID="3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be" exitCode=0
Jan 30 23:00:03 crc kubenswrapper[4751]: I0130 23:00:03.627593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerDied","Data":"3c940c159a2bba2aeb4ad2d7c11d57a3d87f27896ed07f7333b30e4d2c0c80be"}
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.108159 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.298916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") "
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299002 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") "
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") pod \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\" (UID: \"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d\") "
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.299972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.306955 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.306972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph" (OuterVolumeSpecName: "kube-api-access-wpsph") pod "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" (UID: "797d18c7-90e7-4a29-b4bd-c8ad9148ea0d"). InnerVolumeSpecName "kube-api-access-wpsph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402713 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsph\" (UniqueName: \"kubernetes.io/projected/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-kube-api-access-wpsph\") on node \"crc\" DevicePath \"\""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402754 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.402778 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797d18c7-90e7-4a29-b4bd-c8ad9148ea0d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z" event={"ID":"797d18c7-90e7-4a29-b4bd-c8ad9148ea0d","Type":"ContainerDied","Data":"130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05"}
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661549 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130d9c001698551e1af4f1787b7af891f92d385e4e0bd3b2ad037906d0073a05"
Jan 30 23:00:05 crc kubenswrapper[4751]: I0130 23:00:05.661552 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496900-whf7z"
Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.217542 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"]
Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.229133 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496855-ncq7m"]
Jan 30 23:00:06 crc kubenswrapper[4751]: I0130 23:00:06.976810 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 23:00:06 crc kubenswrapper[4751]: E0130 23:00:06.977205 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 23:00:07 crc kubenswrapper[4751]: I0130 23:00:07.990800 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9671fd-4ee5-4071-8dd4-86a335928d79" path="/var/lib/kubelet/pods/3f9671fd-4ee5-4071-8dd4-86a335928d79/volumes"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.322433 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-tg4r2_c60111a8-d193-4bbb-af4b-a5f286a4b04b/cluster-logging-operator/0.log"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.538312 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-f6llv_d1f22c66-daa2-4dd7-8394-ceab983464e2/collector/0.log"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.621127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_82dfa01d-f00f-4e1c-ab66-d8fbc48eaf76/loki-compactor/0.log"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.775547 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-mc9wc_d066c155-02e0-448e-9d4c-f578a36e553b/loki-distributor/0.log"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.809114 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-r5mfq_653268f5-1827-4109-a68b-3cc7670e65f8/gateway/0.log"
Jan 30 23:00:10 crc kubenswrapper[4751]: I0130 23:00:10.994825 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-r5mfq_653268f5-1827-4109-a68b-3cc7670e65f8/opa/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.018626 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-rbqpr_326140a4-6f2a-48c1-b5a2-0b02ce345c50/gateway/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.051100 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f4fcfb764-rbqpr_326140a4-6f2a-48c1-b5a2-0b02ce345c50/opa/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.180711 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_12f75dd3-7d12-4b19-8e7d-cfef30b3f0ac/loki-index-gateway/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.319187 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_4f247b61-4ba2-4c4e-8d97-c16900635ddc/loki-ingester/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.437718 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-gbf6p_096a86f8-72dc-4bd5-a2b4-48b67a26d792/loki-querier/0.log"
Jan 30 23:00:11 crc kubenswrapper[4751]: I0130 23:00:11.552725 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-7cdp9_8083b036-5700-420a-ad3f-1e471813194e/loki-query-frontend/0.log"
Jan 30 23:00:19 crc kubenswrapper[4751]: I0130 23:00:19.975728 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 23:00:19 crc kubenswrapper[4751]: E0130 23:00:19.976580 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 23:00:25 crc kubenswrapper[4751]: I0130 23:00:25.964785 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p8nst_41e79790-830a-48bb-93b6-dd55dc050acf/kube-rbac-proxy/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.193117 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p8nst_41e79790-830a-48bb-93b6-dd55dc050acf/controller/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.210935 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.478213 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.500847 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.539342 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.542919 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.701407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.750243 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.782544 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.812789 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log"
Jan 30 23:00:26 crc kubenswrapper[4751]: I0130 23:00:26.996547 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-frr-files/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.063165 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-reloader/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.075995 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/controller/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.087170 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/cp-metrics/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.276645 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/frr-metrics/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.341207 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/kube-rbac-proxy-frr/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.386677 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/kube-rbac-proxy/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.500793 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/reloader/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.635199 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2zl97_f8544a86-1b67-4c2e-9b56-ca708c47b4e8/frr-k8s-webhook-server/0.log"
Jan 30 23:00:27 crc kubenswrapper[4751]: I0130 23:00:27.989538 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6697664f96-w8tr4_088ac2b9-a8fd-4aa9-854d-a62a9ecd5e9a/manager/0.log"
Jan 30 23:00:28 crc kubenswrapper[4751]: I0130 23:00:28.118354 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-597477f4b5-q868h_61545af5-1133-4922-a477-9155212b642c/webhook-server/0.log"
Jan 30 23:00:28 crc kubenswrapper[4751]: I0130 23:00:28.389664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zqbmp_e9fc7f0b-0bab-4435-82d8-b78841d64687/kube-rbac-proxy/0.log"
Jan 30 23:00:29 crc kubenswrapper[4751]: I0130 23:00:29.429855 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9zjh6_e5325dd6-6a3a-4e0b-9db6-03c42ea5d1e4/frr/0.log"
Jan 30 23:00:29 crc kubenswrapper[4751]: I0130 23:00:29.802605 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zqbmp_e9fc7f0b-0bab-4435-82d8-b78841d64687/speaker/0.log"
Jan 30 23:00:30 crc kubenswrapper[4751]: I0130 23:00:30.351214 4751 scope.go:117] "RemoveContainer" containerID="17631e0b0228d44951b111801652ba8aead8eab296a100d22a49d18b40b57ded"
Jan 30 23:00:34 crc kubenswrapper[4751]: I0130 23:00:34.976141 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 23:00:34 crc kubenswrapper[4751]: E0130 23:00:34.977044 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.474500 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.607455 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.770107 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.802172 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.880958 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/pull/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.947737 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/util/0.log"
Jan 30 23:00:42 crc kubenswrapper[4751]: I0130 23:00:42.989606 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs2gpw_00263593-80af-4a40-a2c4-538f582434c4/extract/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.151470 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.388405 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.388674 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.404989 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.614955 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/util/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.638631 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/extract/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.647975 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71347blh_eac36070-4c04-460f-bfbb-e77659bad07e/pull/0.log"
Jan 30 23:00:43 crc kubenswrapper[4751]: I0130 23:00:43.856747 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.018405 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.062388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.135288 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.263987 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-utilities/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.335925 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/extract-content/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.546612 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.561874 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pscx6_fd9d691f-2785-4248-80d8-903f36ff7f1f/registry-server/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.778787 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.803969 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log"
Jan 30 23:00:44 crc kubenswrapper[4751]: I0130 23:00:44.821717 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.032223 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-content/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.041143 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/extract-utilities/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.231279 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s9tfl_7804f857-fb14-4305-97cc-c966621a55b2/marketplace-operator/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.387890 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.565398 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.737664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.737854 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.975755 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 23:00:45 crc kubenswrapper[4751]: E0130 23:00:45.976128 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 23:00:45 crc kubenswrapper[4751]: I0130 23:00:45.987868 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvqpq_f675e6ee-15d0-4fa7-94ec-c08976e45a20/registry-server/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.007306 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-content/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.007691 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/extract-utilities/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.244450 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n5g7x_b187a442-317c-42c9-ba1a-ff41e0b9bc90/registry-server/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.275022 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.492554 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.493884 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.545178 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.718336 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-content/0.log"
Jan 30 23:00:46 crc kubenswrapper[4751]: I0130 23:00:46.779523 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/extract-utilities/0.log"
Jan 30 23:00:47 crc kubenswrapper[4751]: I0130 23:00:47.798212 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zps7r_fac62ab3-6625-4680-a70b-235f054baa64/registry-server/0.log"
Jan 30 23:00:57 crc kubenswrapper[4751]: I0130 23:00:57.982642 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b"
Jan 30 23:00:57 crc kubenswrapper[4751]: E0130 23:00:57.983886 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.166144 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496901-zql87"]
Jan 30 23:01:00 crc kubenswrapper[4751]: E0130 23:01:00.168606 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.168742 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.169119 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="797d18c7-90e7-4a29-b4bd-c8ad9148ea0d" containerName="collect-profiles"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.170313 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.192709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496901-zql87"]
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297589 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297687 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.297778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.400565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.401661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.408426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.423617 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.424299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.424486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"keystone-cron-29496901-zql87\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.495238 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.524152 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-vpng2_16999302-ac18-4e1c-b3f7-a2bf3f7605aa/prometheus-operator-admission-webhook/0.log"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.571297 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-858b879-n4cw4_c0edc270-3913-41f7-9218-32549d1d3dea/prometheus-operator-admission-webhook/0.log"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.594111 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5nv4n_96f3e554-fbfc-4716-b6ee-0913394521fa/prometheus-operator/0.log"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.788492 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-p97jc_0d7cf074-b623-45d0-ac84-c1e52a626885/observability-ui-dashboards/0.log"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.825016 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lhkl2_3ee6b659-c8c9-4f07-a897-c69db812f880/operator/0.log"
Jan 30 23:01:00 crc kubenswrapper[4751]: I0130 23:01:00.836438 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-l498d_7472790e-3a0e-40dd-909c-4301ba84d884/perses-operator/0.log"
Jan 30 23:01:01 crc kubenswrapper[4751]: I0130 23:01:01.301868 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496901-zql87"]
Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.297211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerStarted","Data":"6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6"}
Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.297881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerStarted","Data":"60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86"}
Jan 30 23:01:02 crc kubenswrapper[4751]: I0130 23:01:02.321642 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496901-zql87" podStartSLOduration=2.321622795 podStartE2EDuration="2.321622795s" podCreationTimestamp="2026-01-30 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 23:01:02.310244694 +0000 UTC m=+6401.056067343" watchObservedRunningTime="2026-01-30 23:01:02.321622795 +0000 UTC m=+6401.067445444"
Jan 30 23:01:06 crc kubenswrapper[4751]: I0130 23:01:06.340821 4751 generic.go:334] "Generic (PLEG): container finished" podID="a91608ea-b09c-4747-9249-51a7aa22de08" containerID="6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6" exitCode=0
Jan 30 23:01:06 crc kubenswrapper[4751]: I0130 23:01:06.340888 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87"
event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerDied","Data":"6ad35f97dd21fbfc1789403681e18f795425647b698999dcbb5b5807329516f6"} Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.807062 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886714 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.886754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.887015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") pod \"a91608ea-b09c-4747-9249-51a7aa22de08\" (UID: \"a91608ea-b09c-4747-9249-51a7aa22de08\") " Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.925479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.939598 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz" (OuterVolumeSpecName: "kube-api-access-r8fwz") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "kube-api-access-r8fwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.994392 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:07 crc kubenswrapper[4751]: I0130 23:01:07.994662 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8fwz\" (UniqueName: \"kubernetes.io/projected/a91608ea-b09c-4747-9249-51a7aa22de08-kube-api-access-r8fwz\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.010077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.062130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data" (OuterVolumeSpecName: "config-data") pod "a91608ea-b09c-4747-9249-51a7aa22de08" (UID: "a91608ea-b09c-4747-9249-51a7aa22de08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.096738 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.096770 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91608ea-b09c-4747-9249-51a7aa22de08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367198 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496901-zql87" event={"ID":"a91608ea-b09c-4747-9249-51a7aa22de08","Type":"ContainerDied","Data":"60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86"} Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367236 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c2b303ea32ec95d9debf697aeaa801f1a06e41659995e958951d8e46f97e86" Jan 30 23:01:08 crc kubenswrapper[4751]: I0130 23:01:08.367262 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496901-zql87" Jan 30 23:01:10 crc kubenswrapper[4751]: I0130 23:01:10.976301 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:10 crc kubenswrapper[4751]: E0130 23:01:10.977127 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:14 crc kubenswrapper[4751]: I0130 23:01:14.836212 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/kube-rbac-proxy/0.log" Jan 30 23:01:14 crc kubenswrapper[4751]: I0130 23:01:14.936631 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7988bf4897-spq9h_d32a4de7-a9b5-408d-b678-bcc0244cceee/manager/0.log" Jan 30 23:01:23 crc kubenswrapper[4751]: I0130 23:01:23.975871 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:23 crc kubenswrapper[4751]: E0130 23:01:23.976610 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.093581 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:34 crc kubenswrapper[4751]: E0130 23:01:34.095265 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.095288 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.095668 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91608ea-b09c-4747-9249-51a7aa22de08" containerName="keystone-cron" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.097596 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.154404 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.248542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350085 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.350306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod 
\"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.351460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.351486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.375957 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"community-operators-pzbtf\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:34 crc kubenswrapper[4751]: I0130 23:01:34.445603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.288995 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655612 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" exitCode=0 Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655661 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea"} Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.655720 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"6ec1a8fb44f2780198407768eba5cde4184587063cd63e961da2205d4fa2e8c7"} Jan 30 23:01:35 crc kubenswrapper[4751]: I0130 23:01:35.976280 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:35 crc kubenswrapper[4751]: E0130 23:01:35.977792 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:37 crc kubenswrapper[4751]: I0130 23:01:37.679835 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" 
event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} Jan 30 23:01:38 crc kubenswrapper[4751]: I0130 23:01:38.707463 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" exitCode=0 Jan 30 23:01:38 crc kubenswrapper[4751]: I0130 23:01:38.707812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} Jan 30 23:01:39 crc kubenswrapper[4751]: I0130 23:01:39.720995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerStarted","Data":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} Jan 30 23:01:39 crc kubenswrapper[4751]: I0130 23:01:39.749512 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzbtf" podStartSLOduration=2.221068455 podStartE2EDuration="5.749492163s" podCreationTimestamp="2026-01-30 23:01:34 +0000 UTC" firstStartedPulling="2026-01-30 23:01:35.658486859 +0000 UTC m=+6434.404309508" lastFinishedPulling="2026-01-30 23:01:39.186910567 +0000 UTC m=+6437.932733216" observedRunningTime="2026-01-30 23:01:39.741940097 +0000 UTC m=+6438.487762756" watchObservedRunningTime="2026-01-30 23:01:39.749492163 +0000 UTC m=+6438.495314822" Jan 30 23:01:44 crc kubenswrapper[4751]: I0130 23:01:44.447378 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:44 crc kubenswrapper[4751]: I0130 23:01:44.448042 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:45 crc kubenswrapper[4751]: I0130 23:01:45.495407 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pzbtf" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" probeResult="failure" output=< Jan 30 23:01:45 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:01:45 crc kubenswrapper[4751]: > Jan 30 23:01:46 crc kubenswrapper[4751]: I0130 23:01:46.976238 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:46 crc kubenswrapper[4751]: E0130 23:01:46.976820 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:01:49 crc kubenswrapper[4751]: E0130 23:01:49.956253 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51958->38.102.83.39:41127: write tcp 38.102.83.39:51958->38.102.83.39:41127: write: broken pipe Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.510414 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.562459 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:54 crc kubenswrapper[4751]: I0130 23:01:54.750681 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:55 crc kubenswrapper[4751]: I0130 23:01:55.897974 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzbtf" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" containerID="cri-o://8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" gracePeriod=2 Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.470698 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590425 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590721 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.590829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") pod \"66a67eb5-04b0-4bcd-814d-e59031703d25\" (UID: \"66a67eb5-04b0-4bcd-814d-e59031703d25\") " Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.592167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities" (OuterVolumeSpecName: "utilities") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.604715 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq" (OuterVolumeSpecName: "kube-api-access-hz9rq") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "kube-api-access-hz9rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.653810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a67eb5-04b0-4bcd-814d-e59031703d25" (UID: "66a67eb5-04b0-4bcd-814d-e59031703d25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.694894 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz9rq\" (UniqueName: \"kubernetes.io/projected/66a67eb5-04b0-4bcd-814d-e59031703d25-kube-api-access-hz9rq\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.695443 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.695636 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a67eb5-04b0-4bcd-814d-e59031703d25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913593 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" exitCode=0 Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzbtf" event={"ID":"66a67eb5-04b0-4bcd-814d-e59031703d25","Type":"ContainerDied","Data":"6ec1a8fb44f2780198407768eba5cde4184587063cd63e961da2205d4fa2e8c7"} Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913700 4751 scope.go:117] "RemoveContainer" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.913702 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzbtf" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.949002 4751 scope.go:117] "RemoveContainer" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.990284 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:56 crc kubenswrapper[4751]: I0130 23:01:56.995890 4751 scope.go:117] "RemoveContainer" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.007145 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzbtf"] Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.125483 4751 scope.go:117] "RemoveContainer" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.137361 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": container with ID starting with 8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8 not found: ID does not exist" containerID="8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.137424 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8"} err="failed to get container status \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": rpc error: code = NotFound desc = could not find container \"8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8\": container with ID starting with 8c515bda47d2ccd57f53dd3e5c9a6578a8a8721434831d49d4334d68a31a9db8 not found: ID does not exist" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.137449 4751 scope.go:117] "RemoveContainer" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.141493 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": container with ID starting with 6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8 not found: ID does not exist" containerID="6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.141683 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8"} err="failed to get container status \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": rpc error: code = NotFound desc = could not find container \"6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8\": container with ID starting with 6aca60af051abbfa2e99b1f3059957cb4f174731379fe14f5e3da878e27ca6e8 not found: ID does not exist" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.141783 4751 scope.go:117] "RemoveContainer" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: E0130 23:01:57.147477 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": container with ID starting with 3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea not found: ID does not exist" containerID="3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea" Jan 30 23:01:57 crc kubenswrapper[4751]: I0130 23:01:57.147752 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea"} err="failed to get container status \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": rpc error: code = NotFound desc = could not find container \"3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea\": container with ID starting with 3ac9deb2c6302801541213cb3eea491c1284f486d5a6d1433bc5d56708c0acea not found: ID does not exist" Jan 30 23:01:58 crc kubenswrapper[4751]: I0130 23:01:58.010283 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" path="/var/lib/kubelet/pods/66a67eb5-04b0-4bcd-814d-e59031703d25/volumes" Jan 30 23:01:58 crc kubenswrapper[4751]: I0130 23:01:58.976020 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:01:58 crc kubenswrapper[4751]: E0130 23:01:58.976680 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:10 crc kubenswrapper[4751]: I0130 23:02:10.976395 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:10 crc kubenswrapper[4751]: E0130 23:02:10.977223 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:22 crc kubenswrapper[4751]: I0130 23:02:22.976283 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:22 crc kubenswrapper[4751]: E0130 23:02:22.978666 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:35 crc kubenswrapper[4751]: I0130 23:02:35.976002 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:35 crc kubenswrapper[4751]: E0130 23:02:35.977302 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:02:50 crc kubenswrapper[4751]: I0130 23:02:50.975668 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:02:50 crc kubenswrapper[4751]: E0130 23:02:50.976467 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:04 crc kubenswrapper[4751]: I0130 23:03:04.976015 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:04 crc kubenswrapper[4751]: E0130 23:03:04.976996 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:19 crc kubenswrapper[4751]: I0130 23:03:19.976409 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:19 crc kubenswrapper[4751]: E0130 23:03:19.977343 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.930554 4751 generic.go:334] "Generic (PLEG): container finished" podID="bc2d69f7-78aa-4618-a287-008258e34b47" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" exitCode=0 Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.930668 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-89qbh/must-gather-xtff4" event={"ID":"bc2d69f7-78aa-4618-a287-008258e34b47","Type":"ContainerDied","Data":"d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9"} Jan 30 23:03:22 crc kubenswrapper[4751]: I0130 23:03:22.932315 4751 scope.go:117] "RemoveContainer" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" Jan 30 23:03:23 crc kubenswrapper[4751]: I0130 23:03:23.420187 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/gather/0.log" Jan 30 23:03:26 crc kubenswrapper[4751]: E0130 23:03:26.105470 4751 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:59878->38.102.83.39:41127: read tcp 
38.102.83.39:59878->38.102.83.39:41127: read: connection reset by peer Jan 30 23:03:30 crc kubenswrapper[4751]: I0130 23:03:30.976068 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:30 crc kubenswrapper[4751]: E0130 23:03:30.976819 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:31 crc kubenswrapper[4751]: I0130 23:03:31.181177 4751 scope.go:117] "RemoveContainer" containerID="be6e18572bbebc1a7aec700bd2eb90d12dc04a78e2daff85c59a029c18a1fcc3" Jan 30 23:03:31 crc kubenswrapper[4751]: I0130 23:03:31.224857 4751 scope.go:117] "RemoveContainer" containerID="77378602a044fe43cd54d596511e6b12c14155716b7a67523c049f4f81292b13" Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.866495 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.885516 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-89qbh/must-gather-xtff4"] Jan 30 23:03:32 crc kubenswrapper[4751]: I0130 23:03:32.894359 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-89qbh/must-gather-xtff4" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" containerID="cri-o://f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" gracePeriod=2 Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.068723 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.070831 4751 generic.go:334] "Generic (PLEG): container finished" podID="bc2d69f7-78aa-4618-a287-008258e34b47" containerID="f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" exitCode=143 Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.589536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.590047 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.678903 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") pod \"bc2d69f7-78aa-4618-a287-008258e34b47\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.684251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") pod \"bc2d69f7-78aa-4618-a287-008258e34b47\" (UID: \"bc2d69f7-78aa-4618-a287-008258e34b47\") " Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.710375 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv" (OuterVolumeSpecName: "kube-api-access-5d2qv") pod "bc2d69f7-78aa-4618-a287-008258e34b47" (UID: "bc2d69f7-78aa-4618-a287-008258e34b47"). InnerVolumeSpecName "kube-api-access-5d2qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.790266 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d2qv\" (UniqueName: \"kubernetes.io/projected/bc2d69f7-78aa-4618-a287-008258e34b47-kube-api-access-5d2qv\") on node \"crc\" DevicePath \"\"" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.886588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bc2d69f7-78aa-4618-a287-008258e34b47" (UID: "bc2d69f7-78aa-4618-a287-008258e34b47"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.892956 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bc2d69f7-78aa-4618-a287-008258e34b47-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 23:03:33 crc kubenswrapper[4751]: I0130 23:03:33.988275 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" path="/var/lib/kubelet/pods/bc2d69f7-78aa-4618-a287-008258e34b47/volumes" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.082553 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-89qbh_must-gather-xtff4_bc2d69f7-78aa-4618-a287-008258e34b47/copy/0.log" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.083400 4751 scope.go:117] "RemoveContainer" containerID="f02334ba80bf205d6fb3fa9e2fb257541f03ce6ab3c97fbdf7d1d6f815819596" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.083437 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-89qbh/must-gather-xtff4" Jan 30 23:03:34 crc kubenswrapper[4751]: I0130 23:03:34.123383 4751 scope.go:117] "RemoveContainer" containerID="d98a640d38d1a0008a9787079a3ae73e9ed1113f5304435185bdeae2c0722cd9" Jan 30 23:03:42 crc kubenswrapper[4751]: I0130 23:03:42.976446 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:42 crc kubenswrapper[4751]: E0130 23:03:42.977244 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:03:53 crc kubenswrapper[4751]: I0130 23:03:53.976319 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:03:53 crc kubenswrapper[4751]: E0130 23:03:53.978745 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:05 crc kubenswrapper[4751]: I0130 23:04:05.975734 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:05 crc kubenswrapper[4751]: E0130 23:04:05.976822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:18 crc kubenswrapper[4751]: I0130 23:04:18.977304 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:18 crc kubenswrapper[4751]: E0130 23:04:18.978700 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:31 crc kubenswrapper[4751]: I0130 23:04:31.982804 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:31 crc kubenswrapper[4751]: E0130 23:04:31.983612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:43 crc kubenswrapper[4751]: I0130 23:04:43.544849 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:43 crc kubenswrapper[4751]: E0130 23:04:43.550710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vgfkp_openshift-machine-config-operator(9acdd0f1-560b-4246-b045-c598c5bbb93d)\"" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" Jan 30 23:04:57 crc kubenswrapper[4751]: I0130 23:04:57.976586 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:04:58 crc kubenswrapper[4751]: I0130 23:04:58.777060 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.489664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492176 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-content" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492223 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-content" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492246 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492319 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492358 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492419 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492432 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: E0130 23:06:32.492500 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-utilities" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.492513 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="extract-utilities" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493623 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="copy" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493672 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2d69f7-78aa-4618-a287-008258e34b47" containerName="gather" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.493727 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a67eb5-04b0-4bcd-814d-e59031703d25" containerName="registry-server" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.499653 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.523278 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.637398 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.739796 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.739901 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"redhat-marketplace-b4mk6\" (UID: 
\"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.740618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.763625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"redhat-marketplace-b4mk6\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:32 crc kubenswrapper[4751]: I0130 23:06:32.829821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:33 crc kubenswrapper[4751]: I0130 23:06:33.345313 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034177 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" exitCode=0 Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd"} Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.034476 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"6b9d30ce2f94bfd7ccb71aeea4e3f32cb9dc52bc57142b67658be1b628cc6d99"} Jan 30 23:06:34 crc kubenswrapper[4751]: I0130 23:06:34.037376 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 23:06:36 crc kubenswrapper[4751]: I0130 23:06:36.078470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} Jan 30 23:06:37 crc kubenswrapper[4751]: I0130 23:06:37.093492 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" exitCode=0 Jan 30 23:06:37 crc kubenswrapper[4751]: I0130 23:06:37.093550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} Jan 30 23:06:38 crc kubenswrapper[4751]: I0130 23:06:38.108398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerStarted","Data":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} Jan 30 23:06:38 crc 
kubenswrapper[4751]: I0130 23:06:38.129170 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b4mk6" podStartSLOduration=2.647896669 podStartE2EDuration="6.129148238s" podCreationTimestamp="2026-01-30 23:06:32 +0000 UTC" firstStartedPulling="2026-01-30 23:06:34.036036256 +0000 UTC m=+6732.781858905" lastFinishedPulling="2026-01-30 23:06:37.517287825 +0000 UTC m=+6736.263110474" observedRunningTime="2026-01-30 23:06:38.126944577 +0000 UTC m=+6736.872767256" watchObservedRunningTime="2026-01-30 23:06:38.129148238 +0000 UTC m=+6736.874970897" Jan 30 23:06:42 crc kubenswrapper[4751]: I0130 23:06:42.830174 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:42 crc kubenswrapper[4751]: I0130 23:06:42.830836 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:43 crc kubenswrapper[4751]: I0130 23:06:43.911516 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-b4mk6" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" probeResult="failure" output=< Jan 30 23:06:43 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:06:43 crc kubenswrapper[4751]: > Jan 30 23:06:52 crc kubenswrapper[4751]: I0130 23:06:52.961801 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:53 crc kubenswrapper[4751]: I0130 23:06:53.007486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:53 crc kubenswrapper[4751]: I0130 23:06:53.202696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:54 crc kubenswrapper[4751]: I0130 23:06:54.348166 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b4mk6" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" containerID="cri-o://0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" gracePeriod=2 Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:54.936644 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.022997 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.023133 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.023423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") pod \"e472c7cc-765c-470f-95aa-3982eefa2753\" (UID: \"e472c7cc-765c-470f-95aa-3982eefa2753\") " Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.024091 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities" (OuterVolumeSpecName: "utilities") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.029604 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg" (OuterVolumeSpecName: "kube-api-access-psqzg") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "kube-api-access-psqzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.050344 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e472c7cc-765c-470f-95aa-3982eefa2753" (UID: "e472c7cc-765c-470f-95aa-3982eefa2753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127054 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127083 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psqzg\" (UniqueName: \"kubernetes.io/projected/e472c7cc-765c-470f-95aa-3982eefa2753-kube-api-access-psqzg\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.127093 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e472c7cc-765c-470f-95aa-3982eefa2753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362261 4751 generic.go:334] "Generic (PLEG): container finished" podID="e472c7cc-765c-470f-95aa-3982eefa2753" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" exitCode=0 Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4mk6" event={"ID":"e472c7cc-765c-470f-95aa-3982eefa2753","Type":"ContainerDied","Data":"6b9d30ce2f94bfd7ccb71aeea4e3f32cb9dc52bc57142b67658be1b628cc6d99"} Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362633 4751 scope.go:117] "RemoveContainer" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.362425 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4mk6" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.383065 4751 scope.go:117] "RemoveContainer" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.402022 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.415857 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4mk6"] Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.438546 4751 scope.go:117] "RemoveContainer" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.470154 4751 scope.go:117] "RemoveContainer" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.471534 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": container with ID starting with 0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177 not found: ID does not exist" containerID="0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471570 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177"} err="failed to get container status \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": rpc error: code = NotFound desc = could not find container \"0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177\": container with ID starting with 0ada19261b0536d23d30c98eafdd81444fda732b69e0166994e9fa19e4ea9177 not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471592 4751 scope.go:117] "RemoveContainer" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.471931 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": container with ID starting with 32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314 not found: ID does not exist" containerID="32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471957 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314"} err="failed to get container status \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": rpc error: code = NotFound desc = could not find container \"32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314\": container with ID starting with 32fb450570e54dcbfe979f84d03debac4683fc54a50c4ae8a0f9a596d33df314 not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.471976 4751 scope.go:117] "RemoveContainer" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: E0130 23:06:55.472486 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": container with ID starting with f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd not found: ID does not exist" containerID="f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.472512 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd"} err="failed to get container status \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": rpc error: code = NotFound desc = could not find container \"f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd\": container with ID starting with f31ea8114f78928184f391b655f5d22472a79bf7bc4bea3334837d27ff413acd not found: ID does not exist" Jan 30 23:06:55 crc kubenswrapper[4751]: I0130 23:06:55.994063 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" path="/var/lib/kubelet/pods/e472c7cc-765c-470f-95aa-3982eefa2753/volumes" Jan 30 23:07:13 crc kubenswrapper[4751]: I0130 23:07:13.434728 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6989c95c85-6thsl" podUID="68910b8d-2ec3-4b7c-956c-e3d3518042cf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 30 23:07:24 crc kubenswrapper[4751]: I0130 23:07:24.126712 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:07:24 crc kubenswrapper[4751]: I0130 23:07:24.127202 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.659617 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660548 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-utilities" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-utilities" Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660573 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-content" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660582 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="extract-content" Jan 30 23:07:32 crc kubenswrapper[4751]: E0130 23:07:32.660602 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660610 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.660833 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e472c7cc-765c-470f-95aa-3982eefa2753" containerName="registry-server" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.664210 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.677120 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.746887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.849746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.850159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.850304 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:32 crc kubenswrapper[4751]: I0130 23:07:32.875455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"redhat-operators-p2b44\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.009445 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.602064 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:07:33 crc kubenswrapper[4751]: I0130 23:07:33.822868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"d0b5b81d968d8999b59ba948d3f60aa48b354267086a1f491f86c20822b3a714"} Jan 30 23:07:34 crc kubenswrapper[4751]: I0130 23:07:34.833776 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" exitCode=0 Jan 30 23:07:34 crc kubenswrapper[4751]: I0130 23:07:34.834022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49"} Jan 30 23:07:35 crc kubenswrapper[4751]: I0130 23:07:35.851890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.634821 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.638290 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.649354 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.747653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.849859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.850606 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.850599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.878047 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"certified-operators-xvjxw\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.914895 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" exitCode=0 Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.914962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} Jan 30 23:07:40 crc kubenswrapper[4751]: I0130 23:07:40.963818 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.445010 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.945893 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" exitCode=0 Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.946102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.946345 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"3db843790871be207e59830516f8c964af91d135af682a34952dccf6663b4ce4"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.949477 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerStarted","Data":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} Jan 30 23:07:41 crc kubenswrapper[4751]: I0130 23:07:41.988767 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p2b44" podStartSLOduration=3.458686529 podStartE2EDuration="9.988751407s" podCreationTimestamp="2026-01-30 23:07:32 +0000 UTC" firstStartedPulling="2026-01-30 23:07:34.836956786 +0000 UTC m=+6793.582779435" lastFinishedPulling="2026-01-30 23:07:41.367021664 +0000 UTC m=+6800.112844313" observedRunningTime="2026-01-30 23:07:41.988451009 +0000 UTC m=+6800.734273658" watchObservedRunningTime="2026-01-30 23:07:41.988751407 +0000 UTC m=+6800.734574056" Jan 30 23:07:42 crc kubenswrapper[4751]: I0130 23:07:42.967516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} Jan 30 23:07:43 crc kubenswrapper[4751]: I0130 23:07:43.009943 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:43 crc kubenswrapper[4751]: I0130 23:07:43.010005 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.062148 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:07:44 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:07:44 crc kubenswrapper[4751]: > Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.987700 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" exitCode=0 Jan 30 23:07:44 crc kubenswrapper[4751]: I0130 23:07:44.987771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} Jan 30 23:07:46 crc kubenswrapper[4751]: I0130 23:07:46.004660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerStarted","Data":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} Jan 30 23:07:46 crc kubenswrapper[4751]: I0130 23:07:46.043033 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvjxw" podStartSLOduration=2.591036828 podStartE2EDuration="6.043009097s" podCreationTimestamp="2026-01-30 23:07:40 +0000 UTC" firstStartedPulling="2026-01-30 23:07:41.947786047 +0000 UTC m=+6800.693608696" lastFinishedPulling="2026-01-30 23:07:45.399758316 +0000 UTC m=+6804.145580965" observedRunningTime="2026-01-30 23:07:46.036690735 +0000 UTC m=+6804.782513384" watchObservedRunningTime="2026-01-30 23:07:46.043009097 +0000 UTC m=+6804.788831746" Jan 30 23:07:50 crc kubenswrapper[4751]: I0130 23:07:50.964170 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:50 crc kubenswrapper[4751]: I0130 23:07:50.964820 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.040727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.125568 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:51 crc kubenswrapper[4751]: I0130 23:07:51.294445 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.075618 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvjxw" podUID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerName="registry-server" containerID="cri-o://7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" gracePeriod=2 Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 
23:07:53.757206 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771270 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.771493 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") pod \"0dc499e3-1ee1-422d-8adc-2a493249e84d\" (UID: \"0dc499e3-1ee1-422d-8adc-2a493249e84d\") " Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.773665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities" (OuterVolumeSpecName: "utilities") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.782526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z" (OuterVolumeSpecName: "kube-api-access-5jh6z") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "kube-api-access-5jh6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.827283 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc499e3-1ee1-422d-8adc-2a493249e84d" (UID: "0dc499e3-1ee1-422d-8adc-2a493249e84d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874807 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874864 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc499e3-1ee1-422d-8adc-2a493249e84d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:53 crc kubenswrapper[4751]: I0130 23:07:53.874881 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jh6z\" (UniqueName: \"kubernetes.io/projected/0dc499e3-1ee1-422d-8adc-2a493249e84d-kube-api-access-5jh6z\") on node \"crc\" DevicePath \"\"" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.060167 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:07:54 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:07:54 crc kubenswrapper[4751]: > Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090750 4751 generic.go:334] "Generic (PLEG): container finished" podID="0dc499e3-1ee1-422d-8adc-2a493249e84d" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" exitCode=0 Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvjxw" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvjxw" event={"ID":"0dc499e3-1ee1-422d-8adc-2a493249e84d","Type":"ContainerDied","Data":"3db843790871be207e59830516f8c964af91d135af682a34952dccf6663b4ce4"} Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.090965 4751 scope.go:117] "RemoveContainer" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.116172 4751 scope.go:117] "RemoveContainer" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.116844 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.126944 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.127002 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.127025 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvjxw"] Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.144214 4751 scope.go:117] "RemoveContainer" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.203804 4751 scope.go:117] "RemoveContainer" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.204352 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": container with ID starting with 7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541 not found: ID does not exist" containerID="7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.204401 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541"} err="failed to get container status \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": rpc error: code = NotFound desc = could not find container \"7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541\": container with ID starting with 7abd561fb603302683f5c49c12d21202e37ebf80d83390ca4c598b8d37bcc541 not found: ID does not exist" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.204430 4751 scope.go:117] "RemoveContainer" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.204973 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": container with ID starting with 5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df not found: ID does not exist" containerID="5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205007 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df"} err="failed to get container status \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": rpc error: code = NotFound desc = could not find container \"5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df\": container with ID starting with 5e655d5977a2334fcbb549875e9c9d0817b2d40617febd343a2467bdb65be2df not found: ID does not exist" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205030 4751 scope.go:117] "RemoveContainer" containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: E0130 23:07:54.205338 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": container with ID starting with 4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0 not found: ID does not exist" 
containerID="4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0" Jan 30 23:07:54 crc kubenswrapper[4751]: I0130 23:07:54.205374 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0"} err="failed to get container status \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": rpc error: code = NotFound desc = could not find container \"4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0\": container with ID starting with 4bc5d3b77a00bae9b5f8954cc101dd6528b6bc713822265185cacb964b1d06f0 not found: ID does not exist" Jan 30 23:07:55 crc kubenswrapper[4751]: I0130 23:07:55.994135 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc499e3-1ee1-422d-8adc-2a493249e84d" path="/var/lib/kubelet/pods/0dc499e3-1ee1-422d-8adc-2a493249e84d/volumes" Jan 30 23:08:04 crc kubenswrapper[4751]: I0130 23:08:04.069557 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:08:04 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:08:04 crc kubenswrapper[4751]: > Jan 30 23:08:14 crc kubenswrapper[4751]: I0130 23:08:14.064277 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" probeResult="failure" output=< Jan 30 23:08:14 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 30 23:08:14 crc kubenswrapper[4751]: > Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.072068 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.138691 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:23 crc kubenswrapper[4751]: I0130 23:08:23.324992 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.126789 4751 patch_prober.go:28] interesting pod/machine-config-daemon-vgfkp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.127200 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.127263 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.129414 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} 
pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.129606 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" podUID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerName="machine-config-daemon" containerID="cri-o://c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8" gracePeriod=600 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448566 4751 generic.go:334] "Generic (PLEG): container finished" podID="9acdd0f1-560b-4246-b045-c598c5bbb93d" containerID="c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8" exitCode=0 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerDied","Data":"c0845ebeb9e2f3643b084913909a3731e29a9707e2ccc2dbf0c44b6138b618a8"} Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.448948 4751 scope.go:117] "RemoveContainer" containerID="150bf9237b67858a6696caf528cece78540cd7b4dc8297fae7680f0dfba50d2b" Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.449095 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p2b44" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" containerName="registry-server" containerID="cri-o://23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" gracePeriod=2 Jan 30 23:08:24 crc kubenswrapper[4751]: I0130 23:08:24.996634 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.075495 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") pod \"310d0b6f-f293-446a-8648-ca291f0f429b\" (UID: \"310d0b6f-f293-446a-8648-ca291f0f429b\") " Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.077045 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities" (OuterVolumeSpecName: "utilities") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.080618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg" (OuterVolumeSpecName: "kube-api-access-pf7bg") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "kube-api-access-pf7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.178182 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.178227 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf7bg\" (UniqueName: \"kubernetes.io/projected/310d0b6f-f293-446a-8648-ca291f0f429b-kube-api-access-pf7bg\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.195983 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "310d0b6f-f293-446a-8648-ca291f0f429b" (UID: "310d0b6f-f293-446a-8648-ca291f0f429b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.279732 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310d0b6f-f293-446a-8648-ca291f0f429b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465439 4751 generic.go:334] "Generic (PLEG): container finished" podID="310d0b6f-f293-446a-8648-ca291f0f429b" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" exitCode=0 Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465492 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465541 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2b44" event={"ID":"310d0b6f-f293-446a-8648-ca291f0f429b","Type":"ContainerDied","Data":"d0b5b81d968d8999b59ba948d3f60aa48b354267086a1f491f86c20822b3a714"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465561 4751 scope.go:117] "RemoveContainer" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.465918 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2b44" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.472550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vgfkp" event={"ID":"9acdd0f1-560b-4246-b045-c598c5bbb93d","Type":"ContainerStarted","Data":"fb4f57a963641dd6b90b3701ece0df7387775d4b3de3e0cbb0cc1824d9bf62d7"} Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.508209 4751 scope.go:117] "RemoveContainer" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.522493 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.533199 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p2b44"] Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.536100 4751 scope.go:117] "RemoveContainer" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590070 4751 scope.go:117] "RemoveContainer" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.590858 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": container with ID starting with 23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4 not found: ID does not exist" containerID="23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590896 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4"} err="failed to get container status \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": rpc error: code = NotFound desc = could not find container \"23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4\": container with ID starting with 23ce04b77992080995a97acd5ff649602eda71a4f1bff530ad5c5bd20f50fcb4 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.590947 4751 scope.go:117] "RemoveContainer" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.591393 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": container with ID starting with e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7 not found: ID does not exist" containerID="e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591439 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7"} err="failed to get container status \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": rpc error: code = NotFound desc = could not find container \"e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7\": container with ID starting with 
e883196a3480e1d87066cd617dc4e804d760c8a84854d7ee0d7dbbb42d1b8da7 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591473 4751 scope.go:117] "RemoveContainer" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: E0130 23:08:25.591810 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": container with ID starting with 0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49 not found: ID does not exist" containerID="0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.591835 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49"} err="failed to get container status \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": rpc error: code = NotFound desc = could not find container \"0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49\": container with ID starting with 0065d4cf9e477f08764ff504c4ca1f4ee5abf10fe8fe66bf90f4d02c315d7b49 not found: ID does not exist" Jan 30 23:08:25 crc kubenswrapper[4751]: I0130 23:08:25.990233 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310d0b6f-f293-446a-8648-ca291f0f429b" path="/var/lib/kubelet/pods/310d0b6f-f293-446a-8648-ca291f0f429b/volumes"
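The sequence above is the full lifecycle of an openshift-marketplace catalog pod as the kubelet sees it: volumes are verified and mounted (reconciler_common.go, operation_generator.go), a new sandbox is started, the extract-utilities and extract-content init containers run to completion, and the long-running registry-server container starts. The startup and readiness probes then poll the registry's gRPC endpoint; the repeated 'timeout: failed to connect service ":50051" within 1s' failures mean the server was still loading catalog content, and they stop once the startup probe flips to status="started". On deletion, the container is killed with a grace period, volumes are unmounted and detached, and the "ContainerStatus from runtime service failed ... NotFound" errors during RemoveContainer are benign: the runtime has already deleted the container, so the status lookup races with cleanup.

The pod_startup_latency_tracker lines also decompose cleanly. For redhat-marketplace-b4mk6, image pulling took lastFinishedPulling - firstStartedPulling = m=+6736.263110474 - m=+6732.781858905 = 3.481251569s, and podStartE2EDuration - pull time = 6.129148238s - 3.481251569s = 2.647896669s, which is exactly the reported podStartSLOduration; the SLO figure is the end-to-end startup time with image-pull time excluded.

The startup probe whose output appears in the log is a gRPC health check against the registry-server. Below is a minimal Go sketch of an equivalent probe, assuming the server implements the standard grpc.health.v1 service; the ":50051" address and 1-second timeout mirror the probe output above, but this is an illustration, not the probe binary the pods actually ship.

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Endpoint and timeout taken from the probe output in the log:
        // 'timeout: failed to connect service ":50051" within 1s'.
        addr := "localhost:50051"
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Block until connected or the 1s deadline expires, like the probe does.
        conn, err := grpc.DialContext(ctx, addr,
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s: %v\n", addr, err)
            os.Exit(1)
        }
        defer conn.Close()

        // Assumption: the registry-server registers grpc.health.v1.Health.
        // An empty Service name asks about overall server health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
            fmt.Fprintf(os.Stderr, "service not serving: %v\n", err)
            os.Exit(1)
        }
        fmt.Println("ok: service is SERVING")
    }

Run against a port-forwarded catalog pod, the sketch exits 0 only when the health service reports SERVING, matching the point in the log where the kubelet records probe="startup" status="started" followed by probe="readiness" status="ready".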